This application claims the benefit of Taiwan application Serial No. 107143479, filed Dec. 4, 2018, the disclosure of which is incorporated by reference herein in its entirety.
The disclosure relates to a control system and a control method for a social network.
While immersed in busy work schedules, modern people still have every intention of attending to the daily lives of their family members. The elderly and children are particularly the ones that need to be cared for. If the environments or the physical and mental conditions of family members can be automatically detected and made known to other family members in a social network, interactions between the two parties can be promoted.
However, numerous requirements need to be taken into account in the development of such a social network, for example, the privacy of members, whether the members feel bothered, and how information contents are displayed. These are among the factors that decide current development directions.
The disclosure is directed to a control system and a control method for a social network.
According to one embodiment of the disclosure, a control method for a social network is provided. The control method includes obtaining detection information, analyzing status information of at least one social member according to the detection information, condensing the status information according to a time interval to obtain condensed information, summarizing the condensed information according to a summary priority score to obtain summary information, and displaying the summary information.
According to another embodiment of the disclosure, a control system for a social network is provided. The control system includes at least one detection unit, an analysis unit, a condensation unit, a summary unit and a display unit. The detection unit obtains detection information. The analysis unit analyzes status information of at least one social member according to the detection information. The condensation unit condenses the status information according to a time interval to obtain condensed information. The summary unit summarizes the condensed information according to a summary priority score to obtain summary information. The display unit displays the summary information.
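By way of non-limiting illustration only, the following minimal Python sketch mirrors the claimed flow (obtain, analyze, condense, summarize, display). All function names, data layouts and values here are hypothetical assumptions and are not part of the disclosure.

```python
# A minimal sketch of the claimed pipeline with stand-in types.
from typing import List

def obtain_detection_info(sensors: List[str]) -> List[dict]:
    # Obtaining step: each detection unit contributes raw readings.
    return [{"sensor": s, "value": 0.0, "t": 0.0} for s in sensors]

def analyze_status(detection_info: List[dict]) -> List[dict]:
    # Analyzing step: derive per-member status records from raw readings.
    return [{"member": "A", "state": "sleeping", "t": d["t"]} for d in detection_info]

def condense(status_info: List[dict], interval: float) -> List[dict]:
    # Condensing step: merge records that fall into the same time interval.
    buckets = {}
    for rec in status_info:
        buckets.setdefault(int(rec["t"] // interval), []).append(rec)
    return [{"interval": k, "records": v} for k, v in sorted(buckets.items())]

def summarize(condensed: List[dict], score) -> List[dict]:
    # Summarizing step: keep the highest-scoring condensed entries.
    return sorted(condensed, key=score, reverse=True)[:3]

summary = summarize(condense(analyze_status(obtain_detection_info(["mic"])), 60.0),
                    score=lambda c: len(c["records"]))
print(summary)  # Displaying step: hand the summary information to a screen.
```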
Embodiments are described in detail with the accompanying drawings below to better understand the above and other aspects of the disclosure.
In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosed embodiments. It will be apparent, however, that one or more embodiments may be practiced without these specific details. In other instances, well-known structures and devices are schematically shown in order to simplify the drawing.
Various embodiments are given below to describe a control system and a control method for a social network of the disclosure. In the present disclosure, a social member in the social network can present records of activities (including emotion states, lifestyles, special events, member conversations and/or virtual interactions) by means of a virtual character in a virtual scene presented by multimedia. Further, a user can present information using virtualized and/or metaphorical multimedia selected as desired, and present condensed information on a non-linearly scaled time axis.
Refer to the accompanying drawings, which illustrate a control method for a social network and a control system for a social network according to embodiments of the disclosure. The control system includes at least one detection unit 110, an analysis unit 120, a condensation unit 130, a summary unit 140, a display unit 150, a correction unit 160 and a storage unit 170.
The components above can be integrated in the same electronic device, or be separately provided in different electronic devices. For example, the detection unit 110 can be discretely configured at different locations, the analysis unit 120, the condensation unit 130, the summary unit 140, the correction unit 160 and the storage unit 170 can be provided in the same host, and the display unit 150 can be a screen of a smartphone of a user.
Operations of the components above and various functions of the social network 9000 are described in detail with the flowchart below. In step S110, the detection unit 110 obtains the detection information S1.
The above detection unit 110 can also be a portable detector. When the detection unit 110 is placed in an environment, the detection information S1 can be used to determine details of the ambient environment. For example, the detection information S1 can be used to identify feature objects (e.g., a television, a bed or a dining table) in the environment using image recognition technology. Alternatively, the detection information S1 can be the wireless signals of home appliances, and details of the located environment can be identified from home appliances having wireless communication capabilities.
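As a rough illustration of how appliance wireless signals might map to a located environment, the following sketch assumes hypothetical appliance identifiers and an illustrative room mapping; none of these names come from the disclosure.

```python
# A minimal sketch, assuming appliance identifiers seen in wireless
# beacons; the mapping below is illustrative only.
APPLIANCE_TO_ROOM = {
    "tv": "living room",
    "fridge": "kitchen",
    "bed_sensor": "bedroom",
}

def infer_environment(beacons):
    """Guess the located environment from appliance beacons (detection info S1)."""
    votes = [APPLIANCE_TO_ROOM[b] for b in beacons if b in APPLIANCE_TO_ROOM]
    return max(set(votes), key=votes.count) if votes else "unknown"

print(infer_environment(["tv", "tv", "fridge"]))  # -> "living room"
```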
In one embodiment, the social network 9000 can perform object identification or human face recognition, and present the identified object or social member in the form of a predetermined virtual image on the display unit 150.
The detection unit 110 can also be mounted on an autonomous mobile device. The autonomous mobile device can follow the movement of an object or a social member by an object tracking technique, and can also move autonomously by using the simultaneous localization and mapping (SLAM) technology. When the autonomous mobile device moves, the detection unit 110 can identify the ambient environment, and display the identified object or social member in the form of a predetermined virtual image on the display unit 150. With the detection of the detection unit 110, a detected item can be presented in simulated form in a virtual environment of the social network 9000.
The detection unit 110 can be a contact type or a non-contact type, and is, for example, a microphone, a video camera, an infrared temperature sensor, a humidity sensor, an ambient light sensor, a proximity sensor, a gravity sensor, an accelerometer sensor, a magnetism sensor, a gyroscope, a GPS sensor, a fingerprint sensor, a Hall sensor, a barometer, a heartrate sensor, a blood oxygen sensor, an infrared sensor or a Wi-Fi transceiving module.
The detection unit 110 can also be directly carried on various smart electronic devices, such as smart bracelets, smart earphones, smart glasses, smart watches, smart garments, smart rings, smart socks, smart shoes or heartbeat sensing belts.
Further, the detection unit 110 can also be a part of an electronic device, such as a part of a smart television, a surveillance camera, a game machine, a networked refrigerator or an antitheft system.
In step S120, the analysis unit 120 analyzes the status information S2 of at least one social member according to the detection information S1.
The status information S2 may be categorized into personal information, space information and/or special events. The personal information includes physiological states (physical states and/or mental states) and/or records of activity (personal activities and interaction activities). The space information includes environment states (e.g., temperature and/or humidity) and/or event and object records (e.g., a television being turned on and/or a doorbell ringing). Special events collectively refer to emergencies (e.g., sudden shouting for help, earthquakes and abnormal sounds), environmental events and abnormal events. The personal information and the space information can be browsed on the social network 9000 by a user, and the special events are proactively pushed to the user by the social network 9000.
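One possible, purely illustrative data layout for the status information S2 described above is sketched below; the field names are assumptions, not part of the disclosure.

```python
# A hypothetical layout for the status information S2 categories.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PersonalInfo:
    physical_state: Optional[str] = None   # e.g. "sleeping"
    mental_state: Optional[str] = None     # e.g. "calm"
    activities: List[str] = field(default_factory=list)

@dataclass
class SpaceInfo:
    temperature: Optional[float] = None
    humidity: Optional[float] = None
    events: List[str] = field(default_factory=list)  # e.g. "TV turned on"

@dataclass
class StatusInfo:
    member: str
    personal: PersonalInfo
    space: SpaceInfo
    special_events: List[str] = field(default_factory=list)  # pushed, not browsed
```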
More specifically, the detection unit 110 can use a wearable device to collect the temperatures, blood oxygen levels, heartbeats, calories burned, activities, locations and/or sleep information of social members as the detection information S1, use an infrared sensor to collect the temperatures of social members as the detection information S1, or use non-contact radio sensing technology to collect human heartbeats as the detection information S1. The analysis unit 120 then analyzes the detection information S1 to obtain physical states as the status information S2. The above detection and analysis can be performed in a real-time, non-contact, long-term and/or continuous manner, and functions including sensing, signal processing and/or wireless data transmission can be integrated by a smartphone.
Further, the analysis unit 120 can also analyze the detection information S1 in the form of sounds to obtain physical states as the status information S2. For example, after the detection information S1 such as coughing, sneezing, snoring and/or sleep-talking is inputted into the analysis unit 120, the analysis unit 120 can analyze the detection information S1 according to frequency changes of snoring, teeth-grinding and/or coughing events to obtain sleeping states as the status information S2.
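A minimal sketch of such frequency-based sleep-state classification is given below; it assumes the audio events have already been labelled upstream, and the thresholds are illustrative assumptions only.

```python
# A minimal sketch, assuming labelled night-time audio events;
# thresholds are illustrative, not from the disclosure.
from collections import Counter

def sleep_state(night_events):
    """Classify sleep quality from labelled night-time audio events."""
    counts = Counter(night_events)
    if counts["coughing"] > 10 or counts["teeth-grinding"] > 5:
        return "restless sleep"
    if counts["snoring"] > 20:
        return "snoring-heavy sleep"
    return "normal sleep"

print(sleep_state(["snoring"] * 8 + ["coughing"] * 2))  # -> "normal sleep"
```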
Further, with images or sounds as the detection information S1, the analysis unit 120 can also perform analysis to obtain emotion states (happiness, surprise, anger, dislike, sorrow, fear or neutral) of social members. The analysis unit 120 can identify current expression states from images by the human expression detection technology. After a social member is recognized by the speaker recognition technology, the analysis unit 120 can perform verbal sound analysis including voice emotion detection, emotion-signifying term detection and/or non-verbal sound emotion (e.g., laughter) detection, or consolidate and output the results of images and sounds together. Alternatively, the analysis unit 120 can analyze psychology-related events such as self-talking and repeated conversation contents by using the detection information S1 in the form of verbal sounds to obtain mental states as the status information S2.
The above detection and analysis operations can serve as warnings for states of dementia or abnormal behaviors. For example, through the detection information S1 including facial expressions, eye expressions, sounds, behaviors and/or walking postures of a patient, the analysis unit 120 can perform analysis and obtain the status information S2 indicating a potential aggressive behavior. Taking sound detection in a caretaking scenario for example, the detection information S1 including verbal features, habits, use of terms and/or conversation contents of a social member is detected, and the analysis unit 120 then performs analysis by means of machine learning or deep learning algorithms to obtain the status information S2 indicating abnormal behaviors.
Further, the detection unit 110 can detect, via indoor positioning technology and activity analysis technology, information of a current location (e.g., dining room, bedroom, living room, study room or hallway) and activity information (e.g., dining, sleeping, watching television, reading, or falling on the ground) of a social member to obtain the detection information S1; the analysis unit 120 then uses machine learning or deep learning algorithms to further perform analysis according to the detection information S1 and time information to obtain the status information S2 indicating that the social member is currently, for example, dining.
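As a stand-in for the machine learning or deep learning analysis described above, the following rule-based sketch shows how location and time information could combine into an activity label; the rules and values are illustrative assumptions, not the disclosed algorithm.

```python
# A rule-based stand-in for the learning-based activity analysis;
# the rules below are illustrative only.
def infer_activity(location: str, hour: int) -> str:
    if location == "dining room" and hour in (7, 12, 18):
        return "dining"
    if location == "bedroom" and (hour >= 22 or hour < 7):
        return "sleeping"
    if location == "living room":
        return "watching television"
    return "unknown"

print(infer_activity("dining room", 12))  # -> "dining"
```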
Further, the detection unit 110 can obtain weather information from a third party, or detect environment conditions (e.g., temperature, humidity, weather conditions, sound conditions, air quality and/or water levels) and events such as glass breaking, fireworks (or gunshots), loud noises, high carbon monoxide levels and/or drowning, so as to obtain the detection information S1 of the environment. The analysis unit 120 then uses machine learning or deep learning algorithms to perform analysis according to the detection information S1 to obtain the status information S2 of the environment where the social member is located.
Moreover, the detection unit 110 can obtain the detection information S1 from streaming video/audio, and the analysis unit 120 then determines, for example, verbal activity sections and types, speaking scenarios (talking on the phone, conversing or non-conversing), the talking person, the duration and/or the occurrence frequency of key terms according to the detection information S1, so as to generate consolidated status information S2 indicating the physiological states of a social member.
Alternatively, the analysis unit 120 can perform analysis according to contents, cries, shouts or calls in the detection information S1 to obtain the status information S2 indicating an argument event.
In step S121, if the analysis unit 120 determines that the status information S2 includes a special event, a warning is issued in step S122.
Next, in step S130, the condensation unit 130 condenses the status information S2 according to a time interval to obtain the condensed information S3.
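A minimal sketch of interval-based condensation with a non-linearly scaled time axis is given below: recent records keep short intervals while older records merge into progressively longer ones. The interval lengths are assumptions, not values from the disclosure.

```python
# A minimal sketch of condensation on a non-linearly scaled time axis;
# interval sizes below are illustrative assumptions.
def condense(records, now):
    """records: list of (timestamp_hours, state); returns condensed buckets."""
    def interval_for(age_hours):
        if age_hours < 1:   return 0.25   # 15-minute buckets for the last hour
        if age_hours < 24:  return 1.0    # hourly buckets for the last day
        return 24.0                       # daily buckets beyond that
    buckets = {}
    for t, state in records:
        size = interval_for(now - t)
        key = (size, int(t // size))
        buckets.setdefault(key, []).append(state)
    # keep the dominant state per bucket as the condensed information S3
    return {k: max(set(v), key=v.count) for k, v in buckets.items()}

print(condense([(0.5, "sleeping"), (0.6, "sleeping"), (47.0, "dining")], now=48.0))
```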
Refer to the accompanying drawings, which illustrate examples of the condensed information S3 presented on a non-linearly scaled time axis.
In step S140, the summary unit 140 summarizes the condensed information S3 according to a summary priority score to obtain the summary information S4. The summary priority score S(S3) is determined by a data characteristic A(S3), a lookup preference H(S3) and a type preference PD, as in equation (1) below:
S(S3) = Score(A(S3), H(S3), PD)  (1)
The data characteristic (i.e., A(S3)) is, for example, a time length or frequency. The lookup preference (i.e., H(S3)) is, for example, obtained from analyzing the reading time or reading frequency in browsing logs, and can be calculated as equation (2) below:

H(S3) = Σti H(S3)ti, where H(S3)ti = 0 when the condensed information S3 is not read, 1×μ(x) when part of the contents is read or selected, and −1×μ(x) when part of the contents is skipped  (2)
As shown in equation (2) above, when the entire condensed information S3 is not read, the lookup preference (H(S3)) is 0; when the condensed information S3 is read and a part of the contents is read or selected (e.g., three out of ten sets of contents are selected), the lookup preference (i.e., H(S3)) is 1×μ(x); when the condensed information S3 is read and a part of the contents is skipped (e.g., seven out of ten sets of contents are skipped), the lookup preference (i.e., H(S3)) is −1×μ(x); when the condensed information S3 is repeatedly read, the lookup preferences (i.e., H(S3)ti) of the reads are summed up as ΣH(S3)ti.
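A minimal sketch of the lookup preference H(S3) following the description of equation (2) is given below; representing μ(x) as a simple selected/skipped ratio is an assumption on our part.

```python
# A minimal sketch of the lookup-preference term H(S3); mu(x) is
# modelled here as a simple ratio, which is an assumption.
def mu(part, total):
    return part / total if total else 0.0

def lookup_preference(reads):
    """reads: list of (selected, skipped, total) tuples, one per read."""
    h = 0.0
    for selected, skipped, total in reads:
        if total == 0:
            continue                      # not read: contributes 0
        if selected:
            h += 1 * mu(selected, total)  # part of the contents selected
        elif skipped:
            h += -1 * mu(skipped, total)  # part of the contents skipped
    return h                              # repeated reads are summed

print(round(lookup_preference([(3, 0, 10), (0, 7, 10)]), 2))  # -> -0.4
```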
The type preference (i.e., PD) can be presented by a weight and calculated as equation (3) below:

Wi = a × Num(S3i)/Σi Num(S3i) + (1 − a) × Wi′  (3)

In equation (3), Wi is the updated weight, S3i is the data of the ith data type, Num(S3i) is the data count, Wi′ is the historical weight, a is an adjusting parameter, and i is the data type. The ratio Num(S3i)/Σi Num(S3i) represents the significance level of the ith set of data, and adjusts the historical weight (i.e., Wi′) to obtain the updated weight (i.e., Wi).
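A minimal sketch of the weight update of equation (3), under the reconstruction above, is given below; the blend between the count ratio and the historical weight via the adjusting parameter a is the assumed form.

```python
# A minimal sketch of the equation (3) weight update; the blended
# form and the value of a are assumptions.
def update_weights(counts, historical, a=0.5):
    """counts: {data_type: Num(S3i)}, historical: {data_type: Wi'}."""
    total = sum(counts.values()) or 1
    return {i: a * counts[i] / total + (1 - a) * historical.get(i, 0.0)
            for i in counts}

w = update_weights({"emotion": 6, "sleep": 4}, {"emotion": 0.5, "sleep": 0.5})
print({k: round(v, 2) for k, v in w.items()})
# -> {'emotion': 0.55, 'sleep': 0.45}
```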
The summary priority score S(S3) is calculated according to, for example, equation (4) below:

S(S3) = α×PD + H(S3) + A(S3), where A(S3) = β×F(S3) + γ×L(S3)  (4)

In equation (4), α is a type priority parameter, β is a frequency priority parameter, γ is a length priority parameter, F(S3) is the frequency and L(S3) is the length. The relationship among α, β and γ is, for example but not limited to, α≫β≫γ.
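Putting the pieces together, the following sketch computes the summary priority score of equation (4) as reconstructed above and ranks condensed entries accordingly; the additive combination and the parameter values are assumptions.

```python
# A minimal sketch of equation (4) ranking; parameter values are
# illustrative assumptions only.
def summary_priority_score(pd, h, freq, length, alpha=10.0, beta=1.0, gamma=0.1):
    a = beta * freq + gamma * length      # A(S3) = beta*F(S3) + gamma*L(S3)
    return alpha * pd + h + a             # S(S3) = alpha*PD + H(S3) + A(S3)

items = [
    {"id": "sleep report", "pd": 0.55, "h": -0.4, "freq": 3, "length": 12},
    {"id": "dining event", "pd": 0.45, "h": 0.3, "freq": 1, "length": 4},
]
items.sort(key=lambda it: summary_priority_score(it["pd"], it["h"],
                                                 it["freq"], it["length"]),
           reverse=True)
print([it["id"] for it in items])  # higher S(S3) first
```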
As described above, with the summarization performed by the summary unit 140, the summary information S4 can reflect the reading habit and preference of the user, so as to provide information meeting requirements of the user.
In step S150, the display unit 150 displays the summary information S4.
With the above embodiments, the social network 9000 can proactively detect, in a non-contact and interference-free manner, the conditions of social members, and present those conditions through the condensed information S3 and the summary information S4 in the social network 9000. A virtual character can present the activities of the social members by multimedia in virtual scenes, and a user can select a virtual and/or metaphorical multimedia as desired to realize such presentation.
In step S160, the correction unit 160 corrects the summary information S4 according to feedback from the user, so that subsequent summary information S4 better reflects the reading habits and preferences of the user.
With the embodiments above, the social network 9000 is capable of presenting activities (including emotion states, lifestyles, special events, member conversations and/or virtual interactions) of social members in virtual scenes presented by multimedia. These activities can present their condensed information on a non-linearly scaled time axis, and summary information can be provided according to user preferences.
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed embodiments. It is intended that the specification and examples be considered as exemplary only, with a true scope of the disclosure being indicated by the following claims and their equivalents.
Number | Date | Country | Kind
---|---|---|---
107143479 | Dec. 4, 2018 | TW | national