The subject matter herein generally relates to data processing technologies, and particularly relates to a text message processing device and a text message processing method.
Social media software, such as WeChat and QQ, can only receive a text message or a voice message once, and the receiver must look at the message window, or click on a voice message in the message window, to learn its content. When a sender sends a text message while the receiver is in an inconvenient situation, such as driving a car, an important message may be missed.
Implementations of the present technology will now be described, by way of example only, with reference to the attached figures.
It will be appreciated that for simplicity and clarity of illustration, where appropriate, reference numerals have been repeated among the different figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the exemplary embodiments described herein. However, it will be understood by those of ordinary skill in the art that the exemplary embodiments described herein can be practiced without these specific details. In other instances, methods, procedures, and components have not been described in detail so as not to obscure the related relevant feature being described. The drawings are not necessarily to scale and the proportions of certain parts may be exaggerated to better illustrate details and features. The description is not to be considered as limiting the scope of the exemplary embodiments described herein.
Several definitions that apply throughout this disclosure will now be presented.
The term “comprising” means “including, but not necessarily limited to”; it specifically indicates open-ended inclusion or membership in a so-described combination, group, series, and the like.
At block 201, receiving a text message and recording sender information. The sender information can comprise a sender name and an image of the sender (head portrait).
At block 202, searching for individual voice data of the sender in the voice synthesis database 31.
At block 203, determining whether the individual voice data of the sender is in the voice synthesis database 31; if yes, block 205 is performed; if not, block 204 is performed.
At block 204, recording individual voice data for the sender, and then performing block 205. In an exemplary embodiment, the individual voice data can comprise the basic language unit pronunciation of each language; for example, the basic language unit pronunciation of the Chinese language comprises pronunciations of the 21 initial consonants and the 37 simple or compound vowels, with 5 tones.
At block 205, converting the text message to a voice message using the individual voice data.
At block 206, playing the voice message.
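The flow of blocks 201 to 206 can be illustrated with a rough sketch. The dictionary-backed database and the placeholder helpers record_individual_voice(), convert_to_voice(), and play() below are assumptions for illustration only, not part of the claimed method.

```python
# A minimal sketch of blocks 201-206. The dictionary database and the
# placeholder helpers are assumptions for illustration only.
voice_synthesis_database = {}  # sender name -> individual voice data


def record_individual_voice(sender_info):
    # Placeholder for block 204 (detailed in blocks 301-305 and 401-402).
    return {"name": sender_info["name"], "units": {}}


def convert_to_voice(text, voice_data):
    # Placeholder for block 205: look up and join unit pronunciations.
    return "voice message for: " + text


def play(voice_message):
    # Placeholder for block 206: hand the voice message to an audio player.
    print("playing:", voice_message)


def process_text_message(sender_name, sender_portrait, text):
    # Block 201: receive the text message and record the sender information.
    sender_info = {"name": sender_name, "portrait": sender_portrait}

    # Blocks 202-203: search the voice synthesis database for the sender's
    # individual voice data and determine whether it exists.
    voice_data = voice_synthesis_database.get(sender_name)
    if voice_data is None:
        # Block 204: record individual voice data for the sender first.
        voice_data = record_individual_voice(sender_info)
        voice_synthesis_database[sender_name] = voice_data

    # Block 205: convert the text message to a voice message.
    voice_message = convert_to_voice(text, voice_data)

    # Block 206: play the voice message.
    play(voice_message)
    return voice_message
```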
At block 301, identifying the sender.
At block 302, recording voice information of the sender as the sender reads a specified length of text.
At block 303, extracting voice features of the sender from the voice information.
At block 304, comparing the voice features of the sender with the voice features of an acquiescent (default) voice to obtain a voice feature difference.
At block 305, modifying the voice features of the acquiescent voice using the voice feature difference and generating individual voice data of the sender.
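As an illustration only, blocks 301 to 305 might be sketched as follows; the numeric feature values and the helpers record_reading(), extract_features(), and shift_pronunciation() are assumed placeholders for whatever recording and signal-processing techniques are actually used.

```python
# A minimal sketch of blocks 301-305. Voice features are assumed to be a
# small dictionary of numeric values (e.g. average pitch, speaking rate).
def record_reading(sender_name, specified_text):
    # Placeholder for block 302: capture audio of the sender reading the text.
    return b""


def extract_features(recording):
    # Placeholder for block 303: analyze the recording for voice features.
    return {"pitch": 210.0, "rate": 4.1}


def shift_pronunciation(unit_audio, difference):
    # Placeholder for block 305: adjust one unit pronunciation by the difference.
    return unit_audio


def build_individual_voice(sender_name, specified_text, acquiescent_voice):
    # Blocks 301-302: identify the sender and record the specified text.
    recording = record_reading(sender_name, specified_text)

    # Block 303: extract the sender's voice features from the recording.
    sender_features = extract_features(recording)

    # Block 304: compute the voice feature difference against the acquiescent voice.
    difference = {key: sender_features[key] - acquiescent_voice["features"][key]
                  for key in acquiescent_voice["features"]}

    # Block 305: modify every unit pronunciation of the acquiescent voice by
    # the difference to generate the sender's individual voice data.
    units = {unit: shift_pronunciation(audio, difference)
             for unit, audio in acquiescent_voice["units"].items()}
    return {"name": sender_name, "features": sender_features, "units": units}
```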
At block 401, recording the basic language unit pronunciation of a language; for example, the basic language unit pronunciation of the Chinese language comprises pronunciations of the 21 initial consonants and the 37 simple or compound vowels, with 5 tones.
At block 402, storing the basic language unit pronunciation of the language as the individual voice data of the sender.
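Blocks 401 and 402 can likewise be sketched; the abbreviated unit lists and the record_unit() helper below are assumptions for illustration.

```python
# A minimal sketch of blocks 401-402 for Mandarin Chinese. The unit lists are
# abbreviated and record_unit is a placeholder for the actual recording step.
INITIALS = ["b", "p", "m", "f", "d", "t"]      # abbreviated; 21 in total
FINALS = ["a", "o", "e", "ai", "ei", "ao"]     # abbreviated; 37 in total
TONES = [1, 2, 3, 4, 5]                        # four tones plus the neutral tone


def record_unit(sender_name, unit, tone=None):
    # Placeholder: capture one basic language unit pronunciation from the sender.
    return b""


def record_basic_units(sender_name, voice_synthesis_database):
    units = {}
    # Block 401: record the pronunciation of every basic language unit.
    for initial in INITIALS:
        units[initial] = record_unit(sender_name, initial)
    for final in FINALS:
        for tone in TONES:
            units[(final, tone)] = record_unit(sender_name, final, tone)
    # Block 402: store the recorded pronunciations as the sender's
    # individual voice data in the voice synthesis database.
    voice_data = {"name": sender_name, "units": units}
    voice_synthesis_database[sender_name] = voice_data
    return voice_data
```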
The text message processing method can further comprise setting a playing mode. Setting the playing mode comprises opening or closing an automatic voice playing switch and selecting a speaker whose voice is used to convert the text message. The text message processing method can further comprise making a determination, between block 201 and block 202, as to whether the automatic voice playing switch is opened; if yes, block 202 is performed; if not, the text message is not converted to a voice message and the text message processing method ends.
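As a rough illustration of this setting, the playing mode could be kept as a small settings object that is checked between block 201 and block 202; the field names below are assumptions.

```python
# A minimal sketch of the playing mode check made between blocks 201 and 202.
from dataclasses import dataclass


@dataclass
class PlayingMode:
    auto_play_switch_open: bool = False  # automatic voice playing switch
    speaker: str = "acquiescent"         # "sender" or the acquiescent voice


def should_convert(mode: PlayingMode) -> bool:
    # If the switch is closed, the text message is not converted and the
    # method ends; if it is open, the method continues with block 202.
    return mode.auto_play_switch_open
```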
The speaker whose voice is used to convert the text message can be the sender, or the acquiescent voice can be used. The acquiescent voice is stored in the voice synthesis database 31. The acquiescent voice comprises the basic language unit pronunciation of each language with a specific voice feature. When the text message is converted to a voice message, the pronunciations corresponding to each part of the text are joined together at a specific speed to form the voice message. The acquiescent voice can be a mechanized voice, an animated character's voice, or a famous person's voice.
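The conversion itself can be pictured as concatenation of recorded unit pronunciations, as in the sketch below, consistent with the convert_to_voice() placeholder of the earlier sketch; split_into_units() and join_clips() are hypothetical helpers standing in for the text analysis and the audio concatenation.

```python
# A minimal sketch of block 205: look up the pronunciation of each basic
# language unit of the text and join the clips at a specific speed.
def split_into_units(text):
    # Placeholder: break the text into basic language units (e.g. pinyin units).
    return list(text)


def join_clips(clips, speed=1.0):
    # Placeholder: concatenate audio clips at the chosen speaking speed.
    return b"".join(clips)


def convert_to_voice(text, voice_data, speed=1.0):
    clips = [voice_data["units"].get(unit, b"") for unit in split_into_units(text)]
    return join_clips(clips, speed=speed)
```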
The text message processing method can further comprise storing the text message and voice message and displaying the text message and voice message in the chat window.
The text message processing device 100 can further comprise an identifying module 57, an extracting module 58, a comparing module 59, and a generating module 61. The identifying module 57 is configured to identify the sender. The recording module 54 is further configured to record voice information of the sender as the sender reads a specified length of text. The extracting module 58 is configured to extract voice features of the sender from the voice information. The comparing module 59 is configured to compare the voice features of the sender with the voice features of the acquiescent voice to obtain a voice feature difference. The generating module 61 is configured to modify the voice features of the acquiescent voice using the voice feature difference and to generate the individual voice data of the sender.
The recording module 54 is further configured to record the basic language unit pronunciation of a language. The text message processing device 100 can further comprise a storing module 63, which is configured to store the basic language unit pronunciation of a language as the individual voice data of the sender.
The text message processing device 100 can further comprise a setting module 65, which is configured to set a playing mode. The playing mode can comprise opening or closing the automatic voice playing switch and selecting a speaker whose voice data is used to convert the text message. When the automatic voice playing switch is opened, the text message can be converted to the voice message.
The speaker whose voice data is used to convert the text message can be the sender, or the acquiescent voice can be used. The acquiescent voice is stored in the voice synthesis database 31. The acquiescent voice comprises the basic language unit pronunciation of each language with a specific voice feature. When the text message is converted to a voice message, the pronunciations corresponding to each part of the text are joined together at a specific speed to form the voice message. The acquiescent voice can be a mechanized voice, an animated character's voice, or a famous person's voice.
The storing module 63 is further configured to store the text message and the voice message, and to display them in the chat interface.
The logic instructions stored in the memory 73 can be part of other software or can be used as an independent product. The memory 73 can store software programs or routines, or computer-executable routines, such as the routine instructions or modules corresponding to the text message processing method disclosed in the exemplary embodiment. The processor 71 performs functional applications and data processing by running the software routines, instructions, and modules.
The memory 73 can comprise a routine storing area and a data storing area.
The routine storing area is configured to store an operating system and application routines. The data storing area is configured to store data generated by the text message processing device 100. The memory 73 can comprise a USB flash disk, a mobile hard disk drive, a read-only memory, a random access memory, a diskette, or an optical disk.
The text message processing device 100 can comprise a mobile terminal and a server. The server comprises the processor and the memory. The mobile terminal can be a mobile phone or a tablet computer. The processor loads and executes at least one instruction to achieve the blocks or steps of the text message processing method described above.
In order to record individual voice data for the sender, the at least one instruction loaded by the processor identifies the sender and records voice information of the sender as the sender reads a specified length of text. Voice features of the sender are extracted from the voice information, and the voice features of the sender are compared with the voice features of the acquiescent voice to obtain voice feature differences. The voice features of the acquiescent voice can be modified using the voice feature differences to generate the individual voice data of the sender.
In order to record individual voice data for the sender, the at least one instruction loaded by the processor further records basic language unit pronunciation of a language and stores the basic language unit pronunciation of the language as the individual voice data of the sender.
The mobile terminal can further set a playing mode and send the set-mode data to the server. Setting the playing mode comprises opening or closing the automatic voice playing switch and selecting a speaker whose voice data is used to convert the text message. When the automatic voice playing switch is opened, the text message can be converted to the voice message.
The processor can further store the text message and the voice message, and display them in the chat window of the mobile terminal.
The text message processing device 100 in a second embodiment can be a mobile terminal. The mobile terminal comprises the processor and the memory, and can be a mobile phone or a tablet computer. The processor can execute at least one instruction to achieve the blocks or steps of the text message processing method described above.
In order to record individual voice data for the sender, the at least one instruction loaded by the processor identifies the sender and records voice information of the sender as the sender reads a specified length of text. Voice features of the sender are extracted from the voice information and compared with the voice features of the acquiescent voice to obtain voice feature differences. The voice features of the acquiescent voice are modified using the voice feature differences to generate the individual voice data of the sender.
In order to record individual voice data for the sender, the at least one instruction loaded by the processor also records the basic language unit pronunciation of a language and stores the basic language unit pronunciation of the language as the individual voice data of the sender.
The at least one instruction loaded by the processor can further set a playing mode and send the set-mode data to the server. Setting the playing mode comprises opening or closing the automatic voice playing switch and selecting the person whose voice data is used to convert the text message. When the automatic voice playing switch is opened, the text message can be converted to the voice message.
The at least one instruction loaded by the processor can further store the text message and the voice message, and display them on the screen of the mobile terminal.
The exemplary embodiments shown and described above are only examples.
Many details are often found in the art, such as other features of a text message processing device and method. Therefore, many such details are neither shown nor described. Even though numerous characteristics and advantages of the present technology have been set forth in the foregoing description, together with details of the structure and function of the present disclosure, the disclosure is illustrative only, and changes may be made in the details, especially in matters of shape, size, and arrangement of the parts, within the principles of the present disclosure, up to and including the full extent established by the broad general meaning of the terms used in the claims. It will therefore be appreciated that the exemplary embodiments described above may be modified within the scope of the claims.
Number: 106144287; Date: Dec. 2017; Country: TW; Kind: national.