The subject matter herein generally relates to display systems, and more particularly to an interactive display system and an interactive display method of an electronic device.
In recent years, research on non-contact human-machine interactive systems (i.e., three-dimensional interactive systems) has grown rapidly. A three-dimensional interactive system can provide operations closer to a user's everyday actions, so that the user has a better control experience.
Implementations of the present technology will now be described, by way of example only, with reference to the attached figures.
It will be appreciated that for simplicity and clarity of illustration, where appropriate, reference numerals have been repeated among the different figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein can be practiced without these specific details. In other instances, methods, procedures, and components have not been described in detail so as not to obscure the related relevant feature being described. Also, the description is not to be considered as limiting the scope of the embodiments described herein. The drawings are not necessarily to scale and the proportions of certain parts may be exaggerated to better illustrate details and features of the present disclosure.
Several definitions that apply throughout this disclosure will now be presented.
The term “coupled” is defined as connected, whether directly or indirectly through intervening components, and is not necessarily limited to physical connections. The connection can be such that the objects are permanently connected or releasably connected. The term “comprising,” when utilized, means “including, but not necessarily limited to”; it specifically indicates open-ended inclusion or membership in the so-described combination, group, series and the like.
The present disclosure is described in relation to an interactive display system and an interactive display method using the same.
The electronic device 1 further includes a storage device 12 providing one or more memory functions, at least one processor 13, and a microphone 14. In at least one embodiment, the interactive display system 10 may include computerized instructions in the form of one or more programs, which are stored in the storage device 12 and executed by the processor 13 to perform operations of the electronic device 1.
The storage device 12 stores one or more programs, such as programs of the operating system, other applications of the electronic device 1, and various kinds of data, such as animated visual images. In some embodiments, the storage device 12 may include a memory of the electronic device 1 and/or an external storage card, such as a memory stick, a smart media card, a compact flash card, or any other type of memory card.
In at least one embodiment, the interactive display system 10 may include one or more modules, for example, a voice obtaining module 101, an identifying module 102, and an executing module 103. In general, the word “module”, as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, written in a programming language, such as, Java, C, or assembly. One or more software instructions in the modules may be embedded in firmware, such as in an EPROM. The modules described herein may be implemented as either software and/or hardware modules and may be stored in any type of non-transitory computer-readable medium or other storage device. Some non-limiting examples of non-transitory computer-readable medium include CDs, DVDs, BLU-RAY, flash memory, and hard disk drives.
The voice obtaining module 101 is configured to receive the voice commands picked up by the microphone 14. In addition, the voice obtaining module 101 pre-processes the voice commands, for example by sampling the voice commands, filtering the sampled voice commands with an anti-aliasing bandpass filter, and then denoising the filtered voice commands.
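The pre-processing stages described above can be sketched as follows. This is not part of the disclosed implementation; it is a minimal illustrative sketch assuming a mono floating-point signal, a simple interpolation-based resampler, a Butterworth bandpass filter standing in for the anti-aliasing filter, and an amplitude gate standing in for the denoising step. All parameter values are assumptions.

```python
import numpy as np
from scipy.signal import butter, lfilter

def preprocess(voice, in_rate=44100, out_rate=16000,
               low_hz=300.0, high_hz=3400.0, gate=0.01):
    """Sample, band-pass filter, and denoise a voice command (illustrative)."""
    # 1. Resample by linear interpolation (a crude stand-in for a real resampler).
    n_out = int(len(voice) * out_rate / in_rate)
    t_in = np.linspace(0.0, 1.0, num=len(voice))
    t_out = np.linspace(0.0, 1.0, num=n_out)
    sampled = np.interp(t_out, t_in, voice)
    # 2. Band-pass filter over the speech band (4th-order Butterworth).
    nyq = out_rate / 2.0
    b, a = butter(4, [low_hz / nyq, high_hz / nyq], btype="band")
    filtered = lfilter(b, a, sampled)
    # 3. Simple denoising: zero out samples below an amplitude gate.
    return np.where(np.abs(filtered) < gate, 0.0, filtered)
```

A production system would typically use a proper polyphase resampler and a spectral-subtraction or similar denoiser; the structure of the three stages is what matters here.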
The identifying module 102 is configured to acquire characteristics of the voice commands, such as a short-time average magnitude value, a short-time average energy value, linear predictive coding (LPC) coefficients, and a short-time spectrum of the voice commands. Additionally, the identifying module 102 compares the characteristics of the voice commands with a sound database stored in the storage device 12 to identify the voice commands, and thereby obtains an identification result.
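The four named characteristics can be computed per analysis frame roughly as follows. This is an illustrative sketch, not the disclosed implementation: the frame length, LPC order, window choice, and the autocorrelation-based LPC solver are all assumptions.

```python
import numpy as np

def frame_features(frame, lpc_order=8):
    """Compute the per-frame characteristics named in the disclosure (illustrative)."""
    n = len(frame)
    avg_magnitude = np.mean(np.abs(frame))       # short-time average magnitude
    avg_energy = np.mean(frame ** 2)             # short-time average energy
    # Short-time spectrum: magnitude of the windowed FFT.
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(n)))
    # LPC coefficients via the autocorrelation normal equations
    # (Toeplitz system; small diagonal loading avoids singularity).
    r = np.correlate(frame, frame, mode="full")[n - 1:n + lpc_order]
    R = np.array([[r[abs(i - j)] for j in range(lpc_order)]
                  for i in range(lpc_order)])
    lpc = np.linalg.solve(R + 1e-6 * np.eye(lpc_order), r[1:lpc_order + 1])
    return avg_magnitude, avg_energy, lpc, spectrum
```

Matching these features against a sound database is usually done with a distance measure (e.g. over LPC or spectral vectors), which is omitted here.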
The executing module 103 is configured to execute the data of the animated visual images according to the identification result. Optionally, the data of the sound database can also be executed by the executing module 103. In at least one embodiment, the animated visual images at least include a two-dimensional (2D) cartoon or a 3D cartoon, and both the data of the animated visual images and the data of the sound database correspond to the identification result. That is, a mapping relationship is established between the identification result on one side and the data of the animated visual images and the data of the sound database on the other. For example, when a voice command such as “open the document” is received by the voice obtaining module 101, the executing module 103 executes the data of the animated visual images in response to the voice command “open the document”. Thus, a 2D/3D cartoon may be shown on the touch panel 11 indicating a double-click action on the document. In another example, when a voice command such as “what is your name” is received by the voice obtaining module 101, the executing module 103 executes the data of the animated visual images and the data of the sound database in response to the voice command “what is your name”. Thus, a 2D/3D cartoon may be shown on the touch panel 11 indicating a self-introduction action, and then a name of the 2D/3D cartoon can be output by a speaker (not shown) of the electronic device 1. Therefore, the animated visual images and sound effects interact with the users.
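The mapping relationship described above can be modeled as a simple lookup table from an identification result to the animation and sound entries it triggers. The table contents and file names below are hypothetical and purely illustrative.

```python
# Hypothetical mapping from an identification result to the animated
# visual image data and sound database data it triggers.
COMMAND_MAP = {
    "open the document": {"animation": "double_click.anim", "sound": None},
    "what is your name": {"animation": "self_intro.anim", "sound": "name.wav"},
}

def execute(identification_result, sound_enabled=True):
    """Return the (animation, sound) pair for an identification result."""
    entry = COMMAND_MAP.get(identification_result)
    if entry is None:
        return None, None  # unrecognized command: nothing to execute
    sound = entry["sound"] if sound_enabled else None
    return entry["animation"], sound
```

For example, `execute("open the document")` selects only an animation, while `execute("what is your name")` selects both an animation and a sound to play through the speaker.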
The electronic device 1 has a first mode and a second mode. Optionally, the interactive display system 10 further includes a mode setting module 104 configured to control the electronic device 1 to enter the first mode or the second mode. When the mode setting module 104 controls the electronic device 1 to enter the first mode, the executing module 103 only executes the data of the animated visual images. When the mode setting module 104 controls the electronic device 1 to enter the second mode, the executing module 103 executes both the data of the animated visual images and the data of the sound database. Thus, the sound effects may be turned off to suit a particular environment, such as a public occasion. In general, two prompt windows may be shown on the touch panel 11 to facilitate selection of the first mode or the second mode.
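The two-mode behavior of the mode setting module can be sketched as a small state holder; the class and method names below are illustrative assumptions, not part of the disclosure.

```python
from enum import Enum

class Mode(Enum):
    FIRST = 1   # animated visual images only (sound effects off)
    SECOND = 2  # animated visual images plus sound database data

class ModeSetting:
    """Illustrative stand-in for the mode setting module 104."""
    def __init__(self, default=Mode.SECOND):
        self.mode = default

    def set_mode(self, mode):
        self.mode = mode

    def sound_enabled(self):
        # The executing module plays sound only in the second mode.
        return self.mode is Mode.SECOND
```

The default of `Mode.SECOND` reflects the later remark that the device enters the second mode by default when block 301 is omitted.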
At block 301, the mode setting module controls the electronic device to enter the first mode or the second mode.
At block 302, the voice obtaining module receives the voice commands picked up by the microphone 14 and pre-processes the voice commands.
At block 303, the identifying module acquires the characteristics of the voice commands and compares the characteristics of the voice commands with the sound database for identifying the voice commands, and then the identifying module obtains the identification result.
At block 304, if the electronic device enters the first mode, the executing module only executes the data of the animated visual images, and then a 2D/3D cartoon may be displayed on the electronic device. If the electronic device enters the second mode, the executing module executes both the data of the animated visual images and the data of the sound database, and then a 2D/3D cartoon may be displayed on the electronic device and a sound may be outputted by the electronic device.
In other embodiments, the block 301 can be omitted. At this time, the electronic device enters the second mode by default when the electronic device is turned on.
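Blocks 301 through 304 can be summarized as the following end-to-end sketch. The helper functions are hypothetical stubs (a real system would perform the signal processing and database matching described earlier); only the control flow mirrors the method, including the default second mode when block 301 is omitted.

```python
def preprocess_stub(raw_voice):
    """Stand-in for block 302 pre-processing (text in place of audio)."""
    return raw_voice.strip().lower()

def identify_stub(voice):
    """Stand-in for block 303 identification against the sound database."""
    known = {"open the document", "what is your name"}
    return voice if voice in known else None

def interactive_display(raw_voice, mode="second"):
    """Blocks 301-304 as one flow; returns (animation, sound) names."""
    sound_enabled = (mode == "second")            # block 301 (second by default)
    pre = preprocess_stub(raw_voice)              # block 302
    result = identify_stub(pre)                   # block 303
    if result is None:
        return None, None
    animation = f"{result}.anim"                  # block 304: always animate
    sound = f"{result}.wav" if sound_enabled else None
    return animation, sound
```

File-naming here is illustrative; the point is that the first mode suppresses only the sound output while the animation path is unchanged.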
In summary, the interactive display system 10 includes the voice obtaining module 101 receiving the voice commands, the identifying module 102 comparing the characteristics of the voice commands with the sound database to obtain the identification result, and the executing module 103 executing the data of the animated visual images and the data of the sound database according to the identification result. Thus, the interactive display system 10 is capable of effectively detecting the voice commands of the users, and the animated visual images and sound effects interact with the users, such that the overall control experience can be further improved.
The embodiments shown and described above are only examples. Many details are often found in the art such as the other features of the interactive display system and the interactive display method using the same. Therefore, many such details are neither shown nor described. Even though numerous characteristics and advantages of the present technology have been set forth in the foregoing description, together with details of the structure and function of the present disclosure, the disclosure is illustrative only, and changes may be made in the details, especially in matters of shape, size and arrangement of the parts within the principles of the present disclosure up to, and including the full extent established by the broad general meaning of the terms used in the claims. It will therefore be appreciated that the embodiments described above may be modified within the scope of the claims.
Number | Date | Country | Kind |
---|---|---|---|
201510040421.1 | Jan 2015 | CN | national |