The subject matter herein generally relates to electronic devices, and more particularly to an electronic device for broadcasting a video according to a user's emotive response.
Generally, a user has no control over the content of a video. Different kinds of videos cause different emotive responses in a user watching them.
Implementations of the present disclosure will now be described, by way of example only, with reference to the attached figures.
It will be appreciated that for simplicity and clarity of illustration, where appropriate, reference numerals have been repeated among the different figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein can be practiced without these specific details. In other instances, methods, procedures and components have not been described in detail so as not to obscure the related relevant feature being described. The drawings are not necessarily to scale and the proportions of certain parts may be exaggerated to better illustrate details and features. The description is not to be considered as limiting the scope of the embodiments described herein.
Several definitions that apply throughout this disclosure will now be presented.
The term “comprising” means “including, but not necessarily limited to”; it specifically indicates open-ended inclusion or membership in a so-described combination, group, series and the like.
In general, the word “module” as used hereinafter refers to logic embodied in hardware or firmware, or to a collection of software instructions, written in a programming language such as, for example, Java, C, or assembly. One or more software instructions in the modules may be embedded in firmware such as in an erasable-programmable read-only memory (EPROM). It will be appreciated that the modules may comprise connected logic units, such as gates and flip-flops, and may comprise programmable units, such as programmable gate arrays or processors. The modules described herein may be implemented as either software and/or hardware modules and may be stored in any type of computer-readable medium or other computer storage device.
The electronic device 1 includes at least a processor 10, a memory 20, a display unit 30, a camera unit 40, and a speech acquisition unit 50. The memory 20 stores a plurality of emotive images. In at least one embodiment, the emotive images correspond to emotive responses of the user. For example, when the emotive response of the user is happy, the emotive image may be a laughing cartoon image. The emotive image may be a still image or an animated image, for example.
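By way of example only, the association between emotive responses and the emotive images stored in the memory 20 may be thought of as a simple lookup keyed by the emotive response. The Python sketch below is illustrative only; the dictionary name, response labels, and file names are hypothetical and are not defined by the disclosure.

```python
# Hypothetical lookup associating emotive responses with stored emotive images.
EMOTIVE_IMAGES = {
    "happy": ["laughing_cartoon.gif", "smiley_face.png"],
    "sad": ["crying_cartoon.gif"],
    "angry": ["steaming_cartoon.gif"],
}

def images_for(emotive_response):
    """Return the emotive images stored for a given emotive response."""
    return EMOTIVE_IMAGES.get(emotive_response, [])
```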
In at least one embodiment, the display unit 30 is a liquid crystal display for displaying the video. When the electronic device 1 is a smart phone or a tablet computer, the display unit 30 may be a touch display screen.
In at least one embodiment, the camera unit 40 is a CCD camera or a CMOS camera. The camera unit 40 captures gesture images and/or facial expression images of the user. The gesture images and/or the facial expression images may be still images or animated images. In at least one embodiment, the speech acquisition unit 50 is a microphone.
As illustrated in
The detecting module 101 controls the camera unit 40 to detect in real time the gestures and facial expressions of the user during broadcasting of the video.
In at least one embodiment, the camera unit 40 is installed in the electronic device 1. The video may be a television series, a variety show, a documentary, a music video, a news broadcast, or the like. When the electronic device 1 displays the video, the camera unit 40 starts to capture the gestures and facial expressions of the user within a predefined area. The predefined area may be, for example, within five meters in front of the camera unit 40.
In at least one embodiment, the memory 20 has pre-stored therein facial parameters and hand parameters. When the camera unit 40 captures the user, the camera unit 40 detects the gestures and facial expressions of the user according to the pre-stored facial parameters and hand parameters.
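By way of example only, the real-time detection performed by the detecting module 101 may be sketched as a polling loop over camera frames, as below. The names grab_frame, find_face, and find_hands are hypothetical stand-ins for the camera unit 40 and for whatever detector applies the pre-stored facial and hand parameters; none of them is defined by the disclosure.

```python
import time
from typing import Callable, Iterator, Optional

def detect_in_real_time(grab_frame: Callable[[], Optional[bytes]],
                        find_face: Callable[[bytes], Optional[tuple]],
                        find_hands: Callable[[bytes], Optional[tuple]],
                        interval: float = 0.1) -> Iterator[dict]:
    """Yield facial-expression/gesture observations while the video is broadcast.

    `grab_frame` stands in for the camera unit 40; `find_face` and `find_hands`
    stand in for detectors that extract facial and hand parameter vectors from a
    frame using the pre-stored parameters. A return of None from `grab_frame`
    ends detection.
    """
    while True:
        frame = grab_frame()
        if frame is None:  # broadcast ended or nothing within the predefined area
            break
        yield {"facial_expression": find_face(frame), "gesture": find_hands(frame)}
        time.sleep(interval)  # polling cadence for real-time detection
```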
In another embodiment, the camera unit 40 may be installed in a mobile terminal 2. When the electronic device 1 is a smart television, the camera unit 40 may be installed in a set-top box.
The confirming module 102 confirms the emotive response of the user according to the captured gestures and facial expressions.
In at least one embodiment, the memory 20 has pre-stored therein a plurality of gesture images and facial expression images of different emotive responses of the user. The gesture images and facial expression images are captured and stored in the memory 20 during habitual use of the camera unit 40 by the user.
During a broadcast of the video by the electronic device 1, when the camera unit 40 captures the gestures and facial expressions of the user, the confirming module 102 determines whether the memory 20 has stored therein matching or similar gestures or facial expressions. When the confirming module 102 determines that the memory 20 has matching or similar gestures or facial expressions, the confirming module 102 confirms the emotive response of the user according to the gesture images and facial expression images. In at least one embodiment, the confirming module 102 uses a parameter comparison method to compare the gesture images and facial expression images captured by the camera unit 40 to the gesture images and facial expression images stored in the memory 20 to determine whether there is a matching or similar image.
In at least one embodiment, the emotive response of the user may be angry, sad, happy, energetic, or low energy. For example, when the gesture images and/or facial expression images of the user match or are similar to the gesture images and/or facial expression images in the memory 20 corresponding to an angry emotive response, then the emotive response of the user is determined to be angry.
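By way of example only, the parameter comparison used by the confirming module 102 may be sketched as a nearest-match search over stored parameter vectors, each labeled with an emotive response. The vectors, labels, and threshold below are hypothetical and only illustrate the comparison.

```python
import math

# Hypothetical stored parameter vectors labeled with emotive responses.
STORED_PARAMETERS = [
    ("angry", (0.9, 0.1, 0.2)),
    ("happy", (0.1, 0.9, 0.8)),
    ("sad",   (0.2, 0.2, 0.1)),
]

def confirm_emotive_response(captured, threshold=0.3):
    """Return the emotive response whose stored parameters best match `captured`."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    label, best = min(STORED_PARAMETERS, key=lambda item: distance(item[1], captured))
    return label if distance(best, captured) <= threshold else None

# Example: a captured vector close to the stored "happy" parameters.
print(confirm_emotive_response((0.15, 0.85, 0.75)))  # prints "happy"
```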
The selecting module 103 selects an emotive image from the memory 20 matching the emotive response of the user.
In at least one embodiment, the emotive response of the user corresponds to a plurality of emotive images. When the confirming module 102 confirms the emotive response of the user, the selecting module 103 randomly selects one of the emotive images. For example, when the confirming module 102 confirms the emotive response of the user as angry, the selecting module 103 randomly selects one of the emotive images matching the angry emotive response.
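By way of example only, the random selection performed by the selecting module 103 may be sketched as follows; the mapping passed in is hypothetical.

```python
import random

def select_emotive_image(emotive_response, emotive_images):
    """Randomly pick one of the emotive images stored for the given response."""
    candidates = emotive_images.get(emotive_response, [])
    return random.choice(candidates) if candidates else None

# Example: one of the two images for the angry emotive response is chosen at random.
print(select_emotive_image("angry", {"angry": ["steaming_cartoon.gif", "frowning_face.png"]}))
```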
In another embodiment, the electronic device 1 provides an emotive image management interface 110 (shown in
The obtaining module 104 obtains a position of the display unit 30 where the emotive image is displayed, a broadcast time of the video when the emotive image is displayed, a local date and time, an account name of the user, and an IP address of the electronic device 1.
In at least one embodiment, when the selecting module 103 selects the emotive image matching the emotive response of the user, the emotive image is randomly displayed on the display unit 30, and the obtaining module 104 obtains the position of the display unit 30 where the emotive image is displayed.
In another embodiment, when the emotive image is displayed on the display unit 30, the user may control the position of the emotive image. For example, when the electronic device 1 is a smart television, the user can use the remote control or the mobile terminal 2 of the smart television to control the position of the emotive image on the display unit 30. When the electronic device 1 is a smart phone, the user can use the touch screen to control the position of the emotive image.
The broadcast time of the video when the emotive image is displayed is obtained according to a playback progress of the video. The local date and time and the IP address of the electronic device 1 are obtained according to system information. The account name is obtained according to a user login system.
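By way of example only, the information gathered by the obtaining module 104 may be collected into a single record as sketched below; the field names and helper arguments are hypothetical.

```python
from datetime import datetime

def build_emotive_record(display_position, playback_seconds, account_name, ip_address):
    """Assemble the pieces of information described above into one record."""
    return {
        "display_position": display_position,          # where on the display unit 30 the image appears
        "broadcast_time": playback_seconds,            # playback progress of the video
        "local_datetime": datetime.now().isoformat(),  # from system information
        "account_name": account_name,                  # from the user login system
        "ip_address": ip_address,                      # from system information
    }

record = build_emotive_record((120, 340), 1825, "user01", "192.0.2.10")
```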
The uploading module 105 uploads the emotive image to a server 3.
In at least one embodiment, when the electronic device 1 broadcasts the video, the electronic device 1 communicates with a server 3 of a provider of the video. The provider of the video may be a television station or a video website. In detail, when the uploading module 105 uploads the emotive image to the server 3, the uploading module 105 further uploads the position of the display unit 30 where the emotive image is displayed, the broadcast time of the video when the emotive image is displayed, the local date and time, the account name of the user, and the IP address of the electronic device 1 to the server 3. Thus, an emotive image record includes the position of the display unit 30 where the emotive image is displayed, the broadcast time of the video when the emotive image is displayed, the local date and time, the account name of the user, and the IP address of the electronic device 1.
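By way of example only, and assuming the server 3 accepts the emotive image record as JSON over HTTP POST (a transport the disclosure does not specify), the upload performed by the uploading module 105 may be sketched as follows. The URL and payload layout are hypothetical.

```python
import json
import urllib.request

def upload_emotive_record(server_url, emotive_image_name, record):
    """Send the emotive image name together with its record to the server 3."""
    payload = dict(record, emotive_image=emotive_image_name)
    request = urllib.request.Request(
        server_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status
```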
The broadcasting module 106 obtains from the server 3 the emotive image of the video viewed by the user and broadcasts the video and the emotive image together on the display unit 30.
Referring to
The speech acquisition module 107 responds to a speech acquisition command of the user to control the speech acquisition unit 50 to obtain voice input from the user.
In at least one embodiment, the speech acquisition unit 50 is installed in the electronic device 1. In order to avoid obtaining unnecessary voice input, the speech acquisition unit 50 is in a turned off state by default. When the user needs to input voice, the user can manually turn on the speech acquisition unit 50 by sending a speech acquisition command. The speech acquisition unit 50 responds to the speech acquisition command and begins to acquire the voice input of the user.
The converting module 108 converts the voice input obtained by the speech acquisition unit 50 into text data.
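By way of example only, the acquisition gating and the conversion to text may be sketched as below; record_audio and recognize are hypothetical stand-ins for the speech acquisition unit 50 and for an unspecified speech-to-text engine.

```python
from typing import Callable, Optional

def acquire_and_convert(speech_command_received: bool,
                        record_audio: Callable[[], bytes],
                        recognize: Callable[[bytes], str]) -> Optional[str]:
    """Acquire voice input only after a speech acquisition command, then convert it to text."""
    if not speech_command_received:  # the speech acquisition unit 50 stays off by default
        return None
    audio = record_audio()           # voice input acquired by the speech acquisition unit 50
    return recognize(audio).strip()  # converted into text data by the converting module 108
```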
The obtaining module 104 obtains the position on the display unit 30 where the emotive image and the text data are displayed, the broadcast time of the video when the emotive image and text data are displayed, a local date and time, an account name of the user, and an IP address of the electronic device 1.
The broadcasting module 106 broadcasts the emotive image and text data on the display unit 30. In detail, when the electronic device 1 broadcasts the video again, the broadcasting module 106 broadcasts the emotive image and the text data in the same position and records the local date and time, the account name, and the IP address of the electronic device 1 obtained by the obtaining module 104.
Furthermore, the memory 20 further stores a plurality of advertisements. Broadcasting of each advertisement depends on the emotive response of the user.
The searching module 109 searches the memory 20 for an advertisement matching the emotive response of the user. For example, when the emotive response of the user is sad, the searching module 109 searches for an advertisement for comforting the user, such as a safety advertisement, an insurance advertisement, or the like. When the emotive response of the user is happy, the searching module 109 searches for a beer advertisement, for example.
When the emotive image uploaded by the user has finished displaying, the broadcasting module 106 broadcasts the advertisement on the display unit 30.
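By way of example only, the advertisement search performed by the searching module 109 may be sketched as a lookup over advertisements tagged with the emotive responses they suit; the tags and advertisement names below are hypothetical examples drawn from the description above.

```python
# Hypothetical advertisements stored in the memory 20, tagged by emotive response.
ADVERTISEMENTS = [
    {"name": "insurance_ad.mp4", "emotive_responses": {"sad"}},
    {"name": "safety_ad.mp4",    "emotive_responses": {"sad"}},
    {"name": "beer_ad.mp4",      "emotive_responses": {"happy", "energetic"}},
]

def search_advertisement(emotive_response):
    """Return the first advertisement matching the emotive response, if any."""
    for advertisement in ADVERTISEMENTS:
        if emotive_response in advertisement["emotive_responses"]:
            return advertisement["name"]
    return None

print(search_advertisement("happy"))  # prints "beer_ad.mp4"
```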
Referring to
At block S101, gestures and facial expressions of a user are captured in real time when the electronic device 1 broadcasts a video.
At block S102, an emotive response of the user is determined according to the gestures and facial expressions of the user.
During a broadcast of the video by the electronic device 1, when the camera unit 40 captures the gestures and the facial expression of the user, whether the memory 20 has stored therein matching or similar gestures or facial expressions is determined. When it is determined that the memory 20 has matching or similar gestures or facial expressions, the emotive response of the user is confirmed according to the gesture images and facial expression images.
At block S103, an emotive image from a plurality of emotive images stored in the memory 20 matching the emotive response of the user is selected.
In at least one embodiment, the emotive response of the user corresponds to a plurality of emotive images. When the emotive response of the user is confirmed, one of the emotive images is selected randomly.
At block S104, a position of the emotive image on the display unit 30, a broadcast time of the video when the emotive image is displayed, a local date and time, an account name of the user, and an IP address of the electronic device 1 are obtained.
At block S105, the emotive image is uploaded to a server 3.
At block S106, the emotive image of the video is obtained from the server 3, and the video and the emotive image are broadcast together on the display unit 30.
At block S107, the memory 20 is searched for an advertisement matching the emotive response of the user.
At block S108, when the emotive image uploaded by the user is finished being displayed, the advertisement is broadcasted on the display unit 30.
In at least one embodiment, when the electronic device 1 broadcasts the advertisement, broadcasting of the video is temporarily halted, and the advertisement is displayed in a full screen mode. In another embodiment, when the electronic device 1 broadcasts the advertisement, broadcasting of the video is not halted, and the advertisement is broadcast in a smaller window.
In at least one embodiment, when the electronic device 1 broadcasts the video, the electronic device 1 responds to a speech acquisition command of the user and begins to acquire speech input. The speech input is converted into text data, and the emotive image and the text data are broadcasted onto the display unit 30.
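By way of example only, the order of blocks S101 through S108 may be wired together as sketched below. This sketch assumes the hypothetical helpers shown earlier (detect_in_real_time, confirm_emotive_response, select_emotive_image, build_emotive_record, upload_emotive_record, search_advertisement) and a hypothetical display object standing in for the display unit 30 and the system information; it only illustrates the order of the blocks.

```python
def broadcast_with_emotive_images(observations, display, server_url, emotive_images):
    """Walk through blocks S101-S108 for each observation from the camera unit 40."""
    for observation in observations:                                      # S101: gestures/expressions captured
        captured = observation["facial_expression"] or observation["gesture"]
        response = confirm_emotive_response(captured)                     # S102: emotive response determined
        if response is None:
            continue
        image = select_emotive_image(response, emotive_images)            # S103: matching emotive image selected
        record = build_emotive_record(display.random_position(),          # S104: position, time, account, IP obtained
                                      display.playback_seconds(),
                                      display.account_name,
                                      display.ip_address)
        upload_emotive_record(server_url, image, record)                  # S105: emotive image uploaded to server 3
        display.show_overlay(image, record["display_position"])           # S106: video and emotive image broadcast together
        advertisement = search_advertisement(response)                    # S107: memory 20 searched for a matching advertisement
        if advertisement:
            display.show_advertisement(advertisement)                     # S108: advertisement broadcast after the emotive image
```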
The embodiments shown and described above are only examples. Even though numerous characteristics and advantages of the present technology have been set forth in the foregoing description, together with details of the structure and function of the present disclosure, the disclosure is illustrative only, and changes may be made in the detail, including in matters of shape, size and arrangement of the parts within the principles of the present disclosure up to, and including, the full extent established by the broad general meaning of the terms used in the claims.
Number | Date | Country
---|---|---
62571802 | Oct 2017 | US