1. Technical Field
The present disclosure relates to an emotion abreaction device and a using method of the emotion abreaction device.
2. Related Art
Being a modern office worker is difficult: competition within companies is intense and stressful, and the demands on quality of life are high. A survey shows that nearly eight out of ten office workers feel deeply depressed, and two of them have even considered suicide. Since modern people lack appropriate and correct means of abreaction, the resulting social phenomena such as melancholia, family violence, and alcohol abuse also demand great attention. Therefore, how to establish an appropriate and correct means of emotion abreaction has become a research subject deserving great effort.
Japanese Patent Publication No. 2005-185630 discloses an emotion mitigation system, which analyzes sounds received from a baby or an animal to determine whether the baby or animal is in an emotionally nervous state. If so, the system mitigates the nervous emotion through sounds, remotely-controlled toys, and remotely-controlled lamp lights. Japanese Patent Publication No. 2006-123136 discloses a communication robot, which analyzes the emotional state of a caller by capturing his/her facial image and voice. If the caller is determined to be in an emotionally nervous state, the robot mitigates the caller's nervous emotion by, for example, singing a song.
Accordingly, an exemplary embodiment of the present disclosure is directed to an emotion abreaction device with preferred emotion mitigation and abreaction effects.
An exemplary embodiment of the present disclosure is also directed to a using method of an emotion abreaction device with preferred emotion mitigation and abreaction effects.
An exemplary embodiment of the present disclosure provides an emotion abreaction device, which comprises a body, a control unit, a man machine interacting module, an image input unit, and an emotion abreaction unit, wherein the control unit, the man machine interacting module, the image input unit, and the emotion abreaction unit are disposed in the body. The man machine interacting module is electrically connected to the control unit for the user to input commands to the control unit, and the commands comprise selecting an emotion abreaction mode. The emotion abreaction unit is electrically connected to the control unit and has at least one sensor to measure force and/or volume, so that the user may abreact his or her emotions by way of knocking and/or yelling. The emotion abreaction unit delivers a sensing result to the control unit, and the control unit controls the man machine interacting module to respond to the user with at least one of a voice and an image based on the sensing result. The image input unit is electrically connected to the control unit and is configured to capture a first image of the user. A plurality of training images are grouped into a plurality of angle sets according to a light angle of each of the training images. In addition, an intensity of each of the training images changes based on the light angles. The control unit obtains a target grey level according to an average grey level of one of the angle sets. The control unit further obtains a feature value of the first image, adjusts the feature value according to the target grey level, detects a face part of the first image according to the adjusted feature value, and recognizes the face part.
An exemplary embodiment of the present disclosure provides a using method of an emotion abreaction device, which comprises: capturing a first image of a user; obtaining a target grey level, wherein a plurality of training images are grouped into a plurality of angle sets according to a light angle of each of the training images, and the target grey level is obtained according to an average grey level of one of the angle sets, wherein an intensity of each of the training images changes based on the light angles; obtaining a feature value of the first image, adjusting the feature value according to the target grey level, detecting a face part of the first image according to the adjusted feature value, and recognizing the face part. The using method further comprises: when a user knocks the emotion abreaction unit of the emotion abreaction device, measuring a magnitude of the user's knocking force, then responding to the user with at least one of a voice and an image based on the measured magnitude of the force; when the user yells to the emotion abreaction unit of the emotion abreaction device, measuring the magnitude of the volume of the user's yelling, and then responding to the user with at least one of a voice and an image based on the measured magnitude of the volume.
It is to be understood that both the foregoing general description and the following detailed description are exemplary, and are intended to provide further explanation of the disclosure as claimed.
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Below, exemplary embodiments will be described in detail with reference to accompanying drawings so as to be easily realized by a person having ordinary knowledge in the art. The inventive concept may be embodied in various forms without being limited to the exemplary embodiments set forth herein. Descriptions of well-known parts are omitted for clarity, and like reference numerals refer to like elements throughout.
Although the emotion abreaction device 100 of this embodiment includes two emotion abreaction units, namely the yelling abreaction unit 140 and the knocking abreaction unit 150, it may optionally be configured with only the yelling abreaction unit 140 or only the knocking abreaction unit 150. The yelling abreaction unit 140 has a volume sensor (not shown), also commonly referred to as a decibel meter, which enables the user to abreact the emotions by way of yelling. The knocking abreaction unit 150 has a force sensor (not shown), which enables the user to abreact the emotions by way of knocking, and the force sensor may be an accelerometer.
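As a minimal, non-limiting sketch of how such sensor readings might be turned into magnitudes, the following Python snippet estimates the yelling volume in decibels from microphone samples and the knocking force from the peak accelerometer reading; the function names, the use of an RMS level, and the use of a peak value are illustrative assumptions rather than part of this disclosure.

import numpy as np

def yell_volume_db(mic_samples, reference=1.0):
    # Root-mean-square level of the microphone samples, expressed in decibels,
    # as a stand-in for the reading of the volume sensor (decibel meter).
    rms = np.sqrt(np.mean(np.square(mic_samples, dtype=np.float64)))
    return 20.0 * np.log10(max(rms / reference, 1e-12))

def knock_force(accel_samples):
    # Peak acceleration magnitude as a stand-in for the knocking force
    # measured by the accelerometer of the knocking abreaction unit 150.
    return float(np.max(np.abs(np.asarray(accel_samples, dtype=np.float64))))

For example, yell_volume_db(mic_buffer) and knock_force(accel_buffer) would yield the magnitudes that the abreaction units deliver to the control unit.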
Since the emotion abreaction device 100 has the yelling abreaction unit 140 and the knocking abreaction unit 150, it enables the user to abreact the emotions through a relatively furious process, such as yelling or knocking, thereby achieving preferred effects in emotion mitigation and abreaction. Furthermore, the emotion abreaction device 100 can measure the magnitude of the volume of the user's yelling and the magnitude of the user's knocking force, and respond to the user according to the measured results, thus providing the user with a bi-directional interaction scenario and feeling during his or her emotion abreaction. Thus, the emotion mitigation and abreaction effect is further improved.
Other alternative variations in the emotion abreaction device 100 of this embodiment are described below with reference to
Furthermore, the image input unit 170 may also be used as an object detector for detecting the approach or departure of the user, and thereby automatically turning the emotion abreaction device 100 on or off. Of course, the object detector may instead be an infrared detector or another suitable detector. Although the image input unit 170 may be an image capturing device such as a charge coupled device (CCD), it may alternatively be a card reader, an optical disk drive, a universal serial bus (USB) interface, a Bluetooth transmission module, or any component that enables the user to input images into the emotion abreaction device 100 from an external device. Moreover, the emotion abreaction device 100 may be powered by various energy sources such as an internal battery, an externally-connected power source, or a solar cell.
Referring to
Next, in step S120, the user is selectively greeted with voice and/or image immediately after the emotion abreaction device has been turned on. For example, a greeting voice of “Good day, master, would you like to abreact your emotions?” is given out, or a greeting image is displayed, or both of the above voices and images are used.
Then, the user is selectively requested to choose at least one emotion abreaction mode from knocking and yelling, in step S130. For example, a voice of “Please select” is given out, or a menu image is displayed, or both of the above voices and images are used. If the emotion abreaction device 100 has a touch screen (for example, the man machine interacting module 130), it can further provide a doodling option to the user. The options may be provided to the user as voice prompts or displayed on the screen, depending on whether the emotion abreaction device 100 has a unit for giving out voices or for displaying pictures. Similarly, the user can select by means of giving voice commands, pressing keys, or pressing the touch screen, depending on the type of the command input interface provided by the man machine interacting module 130 of the emotion abreaction device 100. Of course, the emotion abreaction device 100 and the user may use other suitable means to provide and select the options, respectively.
If the user has selected to abreact the emotions by means of knocking, the user may be selectively prompted as to when to knock, in step S140. For example, the voice “5, 4, 3, 2, 1, please beat me!” is played, or a counting-down image is displayed, or both of the above voices and images are used. Then, when the user knocks the knocking abreaction unit 150 of the emotion abreaction device 100, in step S145, the magnitude of the user's knocking force is measured.
If the user has selected to abreact the emotions by means of yelling, the user may be selectively prompted as to when to yell, in step S150. For example, the voice “5, 4, 3, 2, 1, please shout at me!” is played, or a counting-down image is displayed, or both of the above voices and images are used. Then, as the user yells at the yelling abreaction unit 140 of the emotion abreaction device 100, in step S155, the magnitude of the volume of the user's yelling is measured.
Furthermore, regardless of whether the user is knocking or yelling, a voice such as “Sorry, I was wrong” or “Master, please forgive me”, or another voice that is helpful for the user to abreact the emotions, may be played synchronously; or a picture of a twisted face, or another picture that is helpful for the user to abreact the emotions, may be displayed; or both of the above voices and images are used.
If the user has selected to abreact the emotions by doodling, the user is selectively requested to select a built-in image or an externally-input image, such as a photo of an annoying person, and the image is displayed on the touch screen (for example, the man machine interacting module 130), in step S160. If the user does not input or select an image, the control unit 120 can automatically determine the image to be displayed or leave the screen blank. Then, the user doodles on the touch screen by hand or with an appropriate tool, e.g., a stylus, in step S165.
Then, based on the resulting doodling work and the magnitude of the force and/or the volume, the user is responded to through a voice and/or an image, in step S170. The response to the user may include appearing to suffer or to be miserable, informing the user of the magnitude of the force or the volume, imitating running away by moving the emotion abreaction device 100, and/or encouraging the user. For example, a voice such as “Master, you are terrific”, “Master, have you always been so strong”, “Master, your anger index is XX points”, or another voice that is helpful for the user to abreact emotions is played; or an image capable of achieving the same effect is displayed; or the body 110 is moved by the moving unit 160 to imitate running away while the user is knocking, yelling, and/or doodling; or a combination of the above processes is used.
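As a non-limiting sketch of step S170 (the threshold, the 0-100 anger-index scale, and the exact phrasing below are illustrative assumptions), the measured magnitude may be mapped to a spoken or displayed response as follows:

def respond_to_user(force=None, volume=None):
    # Use whichever magnitude was measured (knocking force or yelling volume)
    # and clamp it to an illustrative 0-100 anger index.
    magnitude = force if force is not None else (volume if volume is not None else 0.0)
    anger_index = max(0, min(100, int(magnitude)))
    if anger_index >= 80:
        return "Master, have you always been so strong? Your anger index is %d points." % anger_index
    return "Master, your anger index is %d points." % anger_index

The returned string could then be given out as a voice or shown by the man machine interacting module 130, optionally together with moving the body 110 to imitate running away.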
Then, the user is selectively inquired whether to continue to abreact the emotions or not, in step S180. If the user wants to continue, the process returns to step S130, or jumps directly to step S145, S155, or S165. If the user does not want to continue, the device is turned off, in step S190. Of course, if the user does not respond about whether to continue to abreact the emotions or not, the emotion abreaction device 100 may also be set to turn off automatically after a certain waiting time.
It should be noted that, in the using method of this embodiment, after the emotion abreaction device 100 is turned on in step S110, steps S120 to S160 may be skipped to enable the user to directly knock, yell, or doodle (steps S145, S155, S165), thereby providing the user with the most instant and rapid emotion abreaction. The corresponding flow chart is not additionally depicted herein.
The second embodiment is similar to the first embodiment, so only the differences from the first embodiment are described below.
Referring to
The image input unit 170 captures an image of the user 660 (as shown in
Referring to
In other embodiments, the images in the angle set 530 may be used to calculate the target grey level. The light angles may be “30”, “−30”, or other values, and all the training images may be grouped into more or fewer angle sets. The disclosure is not limited thereto.
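As a minimal, non-limiting sketch of how the training images might be grouped by light angle and how a target grey level could then be derived, consider the following; the angle bins, the choice of taking the mean over an angle set, and the function names are illustrative assumptions (the disclosure only states that the target grey level is obtained according to the average grey level of one of the angle sets).

import numpy as np

def group_by_light_angle(training_images, light_angles,
                         bins=((-90, -20), (-20, 20), (20, 90))):
    # Put each training image into an angle set according to its light angle.
    angle_sets = {b: [] for b in bins}
    for image, angle in zip(training_images, light_angles):
        for low, high in bins:
            if low <= angle < high:
                angle_sets[(low, high)].append(np.asarray(image, dtype=np.float64))
                break
    return angle_sets

def target_grey_level(angle_set):
    # The target grey level follows the average grey level of the chosen angle set.
    return float(np.mean([image.mean() for image in angle_set]))

In this sketch, target_grey_level(angle_sets[(-20, 20)]) would, for example, give a target grey level derived from the frontally-lit training images.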
Referring to
Referring to
In particular, the feature value is adjusted according to the target grey level discussed above by the control unit 420. Taking the sliding window 620 as an example, when determining whether the sliding window is a face part, all the grey levels of the pixels in the sliding window 620 are adjusted according to formula (1), where i_old is a grey level of a pixel in the sliding window 620, t is the target grey level, Av is the average of the grey levels of the pixels in the sliding window 620, and i_ARM is the adjusted grey level. Although formula (1) only defines the adjustment of grey levels, the feature value of a haar-like feature is adjusted as well, because haar-like feature extraction is a linear function (i.e. only summation and subtraction of grey levels). Therefore, in other embodiments, i_old may be a feature value of a haar-like feature and i_ARM the adjusted feature value of the haar-like feature. After executing formula (1), the grey levels in a sliding window are closer to the target grey level, i.e. the average grey level of an angle set having a better detection result. In other words, the goal of formula (1) in this embodiment is to compensate for the effect of different light angles.
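The sketch below illustrates one plausible reading of formula (1); it assumes that each grey level in the sliding window is rescaled by the ratio of the target grey level t to the window average Av, i.e. i_ARM = i_old × (t / Av). This specific form is an assumption consistent with the definitions above (it moves the window average to t and, being a scaling, carries over linearly to haar-like feature values), not a verbatim reproduction of formula (1).

import numpy as np

def adjust_window(window, target_grey_level):
    # window: 2-D array of grey levels inside the sliding window 620.
    window = np.asarray(window, dtype=np.float64)
    av = window.mean()
    # Assumed form of formula (1): scale every grey level so that the
    # average of the window moves to the target grey level t.
    return window * (target_grey_level / av)

def haar_two_rectangle_feature(window, split_row):
    # A simple two-rectangle haar-like feature: the sum of the upper region
    # minus the sum of the lower region. Because this is linear in the grey
    # levels, adjusting the pixels also adjusts the feature value by the
    # same factor t / Av.
    window = np.asarray(window, dtype=np.float64)
    return float(window[:split_row].sum() - window[split_row:].sum())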
After adjusting the feature values of the sliding window 620, the control unit 420 may apply a machine learning algorithm to execute a face detection procedure. For example, the control unit 420 may apply AdaBoost (Adaptive Boosting), an SVM (Support Vector Machine), or a neural network, but the disclosure is not limited thereto. When the sliding window 620 is determined to be a face part, the control unit 420 further recognizes the face part to determine whether the user 660 is using the emotion abreaction device 400 for the first time.
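A schematic, non-limiting sketch of such a detection loop is shown below; the window size, the scanning step, and the classifier interface (any trained AdaBoost or SVM detector exposed as a callable could stand in) are illustrative assumptions rather than the disclosure's exact procedure.

import numpy as np

def detect_face_windows(image, is_face, target_grey_level, window_size=24, step=4):
    # Slide a fixed-size window over the image; each window is first adjusted
    # toward the target grey level and then scored by the learned classifier.
    # is_face: a hypothetical callable returning True for face windows,
    # e.g. a wrapper around a trained AdaBoost or SVM model.
    image = np.asarray(image, dtype=np.float64)
    detections = []
    for y in range(0, image.shape[0] - window_size + 1, step):
        for x in range(0, image.shape[1] - window_size + 1, step):
            window = image[y:y + window_size, x:x + window_size]
            av = max(window.mean(), 1e-6)  # guard against an all-black window
            adjusted = window * (target_grey_level / av)
            if is_face(adjusted):
                detections.append((x, y, window_size, window_size))
    return detections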
If determining that the user 660 is using the emotion abreaction device 400 for the first time, the control unit 420 will record a baseline profile of the user 660 while the user 660 is in a neutral mood. To be specific, the man machine interacting module displays a message to guide the user 660 to stay in a neutral mood, and the yell abreaction unit 140 records a voice signal of the user 660. Then, the yell abreaction unit 140 transmits the voice signal to the control unit 420 as the baseline profile. Alternatively, the knock abreaction unit 150 estimates a magnitude (i.e. a first magnitude) of the knocking force of the user 660 while the user 660 is in a neutral mood. Then, the knock abreaction unit 150 transmits the magnitude to the control unit 420 as the baseline profile. In one embodiment, as illustrated in
If the control unit 420 determines that the user 660 is not using the emotion abreaction device 400 for the first time, the control unit 420 will obtain a current profile of the user 660. The current profile may include a magnitude of a knocking force, a voice signal, or a location of a corner of a mouth. For example, when the user 660 is abreacting the emotions through yelling and knocking, the knock abreaction unit 150 estimates a magnitude (i.e. a second magnitude) of the knocking force of the user 660, and the yell abreaction unit 140 records a voice signal (i.e. a second voice signal) of the user 660. In addition, the second magnitude and the second voice signal are transmitted to the control unit 420 as the current profile. In one embodiment, as illustrated in
After obtaining the current profile, the control unit 420 generates a happiness level by comparing the baseline profile with the current profile. For example, the control unit 420 compares the first magnitude with the second magnitude, or the first voice signal with the second voice signal, to estimate how angry the user 660 is. Consequently, the control unit 420 may control the man machine interacting module 130 to respond to the user 660 according to how angry the user 660 is. On the other hand, the control unit 420 compares the first locations of the corners 802 and 804 with the second locations of the corners 902 and 904 to determine whether the user 660 has a smile on his/her face. For example, after the user 660 abreacts the emotions, if the second locations of the corners 902 and 904 are relatively higher than the first locations of the corners 802 and 804, the control unit 420 detects that the user 660 is smiling, which is represented as a high happiness level.
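A simplified, non-limiting sketch of this comparison is given below; the dictionary field names, the additive anger estimate, and the use of image-coordinate y values for the mouth corners are illustrative assumptions, not the disclosure's exact computation.

def compare_profiles(baseline, current):
    # baseline / current: dictionaries that may contain 'knock_magnitude',
    # 'voice_level', and 'mouth_corners' (a pair of y coordinates of the
    # left and right mouth corners in image coordinates).
    anger = 0.0
    if 'knock_magnitude' in baseline and 'knock_magnitude' in current:
        # A harder knock than the neutral baseline suggests a stronger emotion.
        anger += current['knock_magnitude'] - baseline['knock_magnitude']
    if 'voice_level' in baseline and 'voice_level' in current:
        anger += current['voice_level'] - baseline['voice_level']
    smiling = False
    if 'mouth_corners' in baseline and 'mouth_corners' in current:
        # Smaller y in image coordinates means the corners moved up, i.e. a
        # smile, which corresponds to a high happiness level.
        smiling = sum(current['mouth_corners']) < sum(baseline['mouth_corners'])
    return anger, smiling

The control unit could then pick a response for the man machine interacting module 130 based on the anger estimate and treat the detected smile as an indication of a high happiness level.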
When recording the baseline profile and the current profile, the control unit 420 needs to detect the corners 802 and 804 of the mouth 820 and the corners 902 and 904 of the mouth 920 to obtain their locations. In this embodiment, a symmetrical rectangle is provided to detect the corners of a mouth.
Referring to
Referring to
It should be noted that the symmetrical rectangle 1100 may also be used to detect eyes, ears, or other symmetrical features. In addition, the symmetrical rectangle 1000 may have a different width, a different height, or any number of sub-rectangles. The disclosure is not limited thereto.
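Since the figures describing the symmetrical rectangle are not reproduced here, the following is only a rough, non-limiting sketch of how a symmetrical rectangle split into vertical sub-rectangles might score a candidate location for a symmetric feature such as a mouth; the scoring scheme and the function names are assumptions for illustration.

import numpy as np

def symmetry_score(patch, num_sub_rectangles=4):
    # patch: 2-D grey-level array covered by the symmetrical rectangle.
    # The rectangle is split into vertical sub-rectangles; mirrored pairs of
    # sub-rectangles about the vertical centre line should have similar mean
    # grey levels when the rectangle is centred on a symmetric feature.
    patch = np.asarray(patch, dtype=np.float64)
    columns = np.array_split(patch, num_sub_rectangles, axis=1)
    means = [column.mean() for column in columns]
    half = num_sub_rectangles // 2
    # Lower score = more symmetric; a mouth detector could keep the most
    # symmetric placement and read the corner locations off its left and
    # right edges.
    return float(sum(abs(means[i] - means[-1 - i]) for i in range(half)))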
Referring to
In step S1204, the control unit 420 determines whether the user is using the emotion abreaction device for the first time according to the result of the recognition. If the user is using the device for the first time, in step S1206, the control unit 420 learns the personal profile (i.e. the baseline profile) of the user. For example, the baseline profile may include a voice signal, a magnitude of a knocking force, or the location of a corner of a mouth.
If the user is not using the emotion abreaction device for the first time, or after step S1206, in step S1208, the control unit 420 controls the man machine interacting module 130 to request the user to select a mode. For example, the mode is selected from a yelling mode and a knocking mode.
In step S1210, the man machine interacting module 130 indicates that it is ready for knocking. In step S1212, the knock abreaction unit 150 receives the user's knocking and estimates a magnitude of the knocking.
In step S1214, the man machine interacting module 130 indicates that it is ready for yelling. In step S1216, the yell abreaction unit 140 receives the user's yells and records a voice signal of the yells.
In step S1218, the control unit 420 estimates a happiness level by comparing the baseline profile and the current profile. For example, the current profile may include the voice signal recorded in step S1216, a magnitude estimated in step S1212, or a location of a corner of a mouth.
In step S1220, the man machine interacting module 130 inquires of the user whether to continue with abreaction or not. If the user decides to continue, the process goes back to step S1208; if not, the method ends.
However, all the steps in
Referring to
If the user is yelling, in step S1310, the yell abreaction unit 140 measures the magnitude of the volume of the user's yell. In step S1312, the man machine interacting module 130 responds to the user with at least one of a voice and an image based on the measured magnitude of the volume.
If the user is knocking, in step S1314, the knock abreaction unit 150 measures a magnitude of the user's knocking force. In step S1316, the man machine interacting module 130 responds to the user with at least one of a voice and an image based on the measured magnitude of the force.
However, all the steps in the
In view of the above, the emotion abreaction device of one embodiment of the present disclosure enables the user to abreact the emotions through the furious means of knocking and/or yelling, and has at least one sensor for sensing the magnitude of the force and/or the volume so as to respond to the user accordingly. Furthermore, a feature value is adjusted to alleviate the effects of different light angles and to improve the detection result. In the using method of the emotion abreaction device of one embodiment of the present disclosure, the baseline profile and the current profile are obtained. By comparing the baseline profile and the current profile, a happiness level of the user is estimated accurately, and the emotion abreaction device may respond to the user appropriately. Thus, the user can deeply feel the bi-directional interaction scenario. Therefore, the disclosure provides an appropriate and harmless process for abreaction, reduces social problems, improves life quality, and enables users to achieve a complete abreaction in the aspects of both physiology and psychology.
It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present disclosure without departing from the scope or spirit of the present disclosure. In view of the foregoing, it is intended that the present disclosure cover modifications and variations of this disclosure provided they fall within the scope of the following claims and their equivalents.
Foreign Application Priority Data: Taiwan Application No. 95149995, filed December 2006.
This application is a continuation-in-part of and claims the priority benefit of U.S. application Ser. No. 11/696,189, filed on Apr. 4, 2007, now pending, which claims the priority benefit of Taiwan application serial no. 95149995, filed on Dec. 29, 2006. The entirety of each of the above-mentioned patent applications is hereby incorporated by reference herein and made a part of this specification.
Related U.S. Application Data: parent application Ser. No. 11/696,189, filed April 2007 (US); child (present) application Ser. No. 13/531,598 (US).