The present disclosure relates to a reading method based on a terminal.
This section provides background information related to the present disclosure which is not necessarily prior art.
With the development of technology, a variety of electronic terminals, such as mobile phones and tablet personal computers, are widely used and have become important tools in people's daily life and work. With the increasing popularity of the Internet, users are accustomed to using a variety of terminals to browse news, find information, read e-books, and so on. However, for visually impaired people, there are still many difficulties in using a browser of a terminal to browse webpages or read e-books according to existing terminal-based reading methods.
This section provides a general summary of the disclosure, and is not a comprehensive disclosure of its full scope or all of its features.
Overcoming the shortcomings of existing browser reading technology by providing a new terminal-based reading method makes it possible for visually impaired people to read audibly.
The present disclosure is directed to various embodiments.
The present disclosure provides a reading method based on a terminal. The terminal includes a touch screen. The reading method includes detecting a touching operation on the touch screen to determine a display object corresponding to the touching operation. The method also includes extracting text data corresponding to the display object, converting the extracted text data into voice data, and playing the voice data.
The present disclosure also provides a terminal including a touch screen. The terminal further includes: a detection module configured to detect a touching operation on the touch screen to determine a display object corresponding to the touching operation; a text data extraction module configured to extract text data corresponding to the display object; a conversion module configured to convert the extracted text data into voice data; and a playing module configured to play the voice data.
The present disclosure also provides a computer-readable storage medium including a set of instructions for performing reading. The set of instructions directs at least one processor to perform acts of determining a display object corresponding to an operation on a terminal device, obtaining voice data corresponding to the display object, and playing the voice data.
In the terminal-based reading method and the corresponding terminal of the present disclosure, a touching operation on the touch screen of the terminal can be detected to determine a display object. Corresponding text data can be generated, and the generated text data can then be converted into voice data to be played. This allows visually impaired users to read audibly, broadens and simplifies the capabilities of the application, and makes it convenient to use.
The above description is only an overview of the technical solution of the present disclosure. In order to better understand the technical solution and implement it according to the contents of the specification, and to make the above and other objects, features, and advantages of the present disclosure clearer, the present disclosure is described in further detail hereinafter with reference to embodiments and the accompanying drawings.
Further areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure.
Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings.
Example embodiments will now be described more fully with reference to the accompanying drawings.
For simplicity and illustrative purposes, the present disclosure is described by referring mainly to examples thereof. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be readily apparent, however, that the present disclosure may be practiced without limitation to these specific details. In other instances, some methods and structures have not been described in detail so as not to unnecessarily obscure the present disclosure. Throughout the present disclosure, the terms “a” and “an” are intended to denote at least one of a particular element. As used herein, the term “includes” means includes but not limited to; the term “including” means including but not limited to. The term “based on” means based at least in part on.
As shown in the accompanying drawings, the reading method based on a terminal includes the following blocks.
Block S1: detecting a touching operation on a touch screen of a terminal to determine a display object corresponding to the touching operation. Generally, the touch screen of the terminal can display a plurality of display objects. These display objects can mainly be divided into two types: one type is function operation icons and the other type is specific text. Taking an open browser on the terminal as an example, when the browser is displayed, both function operation icons and specific text will be displayed. The function operation icons of the browser can correspond to operations of the browser itself, such as switching windows, adding bookmarks, opening corresponding webpages from the start page, etc. The specific text of the browser is usually specific text in webpages, for example, specific news, novels, etc.
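For illustration only, the following Kotlin sketch shows one way block S1 might be realized on an Android-style touch screen; the `DisplayObject` type and the hit-testing helper are hypothetical names introduced here and are not part of the claimed method.

```kotlin
import android.graphics.Rect
import android.view.MotionEvent
import android.view.View

// Hypothetical model of an on-screen display object: either a function
// operation icon (e.g. "refresh webpage") or a block of specific text.
sealed class DisplayObject(val bounds: Rect) {
    class FunctionIcon(bounds: Rect, val function: String) : DisplayObject(bounds)
    class SpecificText(bounds: Rect, val text: String) : DisplayObject(bounds)
}

// Block S1: detect a touching operation and determine the display object
// under the touch point by hit-testing its on-screen bounds.
fun attachTouchDetector(
    rootView: View,
    objects: List<DisplayObject>,
    onDisplayObjectTouched: (DisplayObject) -> Unit,
) {
    rootView.setOnTouchListener { _, event ->
        if (event.actionMasked == MotionEvent.ACTION_DOWN) {
            objects.firstOrNull { it.bounds.contains(event.x.toInt(), event.y.toInt()) }
                ?.let(onDisplayObjectTouched)
        }
        true // consume the event in this simplified sketch
    }
}
```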
Block S2: extracting text data corresponding to the display object.
Block S21: mapping a function of the display object; and
Block S22: according to the function of the display object, editing corresponding function text data to obtain text data corresponding to the display object.
When the touched display object in the touching operation is a function operation icon, such as a "refresh webpage" function operation icon of the browser, the terminal will determine the function of the function operation icon and then edit corresponding text data. That is, the terminal edits text data corresponding to "refresh webpage," which is taken as the display object.
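As a minimal sketch of blocks S21 and S22, assuming a simple lookup table from icon function identifiers to readable function text (the identifiers and the `functionTextMap` name are hypothetical):

```kotlin
// Hypothetical mapping from an icon's function identifier to the function
// text data that will later be converted into voice (blocks S21 and S22).
val functionTextMap = mapOf(
    "refresh_page" to "Refresh webpage",
    "switch_window" to "Switch windows",
    "add_bookmark" to "Add bookmark",
)

// Block S21: map the function of the touched icon; Block S22: edit the
// corresponding function text data for that function.
fun extractFunctionText(functionKey: String): String =
    functionTextMap[functionKey] ?: functionKey
```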
Block S26: activating a text selection program; and
Block S27: recognizing the text content of the display object and combining related texts to generate corresponding text data.
When the touched display object in the touching operation is specific content, for example, the content of news or of a novel, the text selection program of the terminal can be used to recognize the corresponding text content, thereby generating corresponding text data.
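The following is an illustrative sketch of blocks S26 and S27, assuming the text selection program yields the touched text plus a list of related fragments; the `TextSelection` type is a hypothetical stand-in for whatever representation the terminal actually uses.

```kotlin
// Hypothetical representation of a selected text region on a webpage:
// the touched paragraph plus related fragments (e.g. continuation lines
// of the same article) identified by the text selection program.
data class TextSelection(val touched: String, val related: List<String>)

// Block S26: activate text selection; Block S27: recognize the text
// content and combine related texts into one piece of text data.
fun combineSelectedText(selection: TextSelection): String =
    (listOf(selection.touched) + selection.related)
        .map { it.trim() }
        .filter { it.isNotEmpty() }
        .joinToString(separator = " ")
```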
Block S3: converting the extracted text data into voice data. In various embodiments, the terminal can send the extracted text data to a server side; the server side recognizes the text data and generates corresponding voice data, and the terminal then receives the voice data returned from the server side. One skilled in the art will understand that, in the present disclosure, the extracted text data can alternatively be converted into voice data directly on the terminal.
Block S4: playing the voice data.
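As one possible sketch of blocks S3 and S4 for the case where conversion happens directly on the terminal, the standard Android `TextToSpeech` engine can synthesize and play the voice in one step; the class and utterance-identifier names below are illustrative, and the server-side variant described above would instead upload the text and play the audio returned by the server.

```kotlin
import android.content.Context
import android.speech.tts.TextToSpeech
import java.util.Locale

// Local text-to-speech sketch for blocks S3 and S4: the Android
// TextToSpeech engine converts the text data into voice and plays it.
class TextReader(context: Context) : TextToSpeech.OnInitListener {
    private var ready = false
    private val tts: TextToSpeech = TextToSpeech(context, this)

    override fun onInit(status: Int) {
        ready = status == TextToSpeech.SUCCESS
        if (ready) tts.setLanguage(Locale.getDefault())
    }

    // Block S3 + S4: convert the extracted text data into voice data and play it.
    fun read(textData: String) {
        if (ready) {
            tts.speak(textData, TextToSpeech.QUEUE_FLUSH, null, "reading-utterance")
        }
    }

    fun release() {
        tts.shutdown()
    }
}
```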
The present disclosure also provides a corresponding terminal.
The detection module 110 is configured to detect a touching operation on a touch screen of the terminal 100 to determine a display object corresponding to the touching operation. The text data extraction module 120 is configured to extract text data corresponding to the display object. The conversion module 130 is configured to convert the extracted text data into voice data. The playing module 140 is configured to play the voice data. The display objects on the touch screen can be divided into two types: one type is function operation icons and the other type is specific text.
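Purely as an illustrative sketch of this module structure (the interface and class names are hypothetical, and display objects are simplified to plain strings for brevity), the four modules of terminal 100 might be expressed and wired together as follows:

```kotlin
// Hypothetical interfaces mirroring the modules of terminal 100 described above.
interface DetectionModule { fun detectDisplayObject(x: Float, y: Float): String? }     // module 110
interface TextDataExtractionModule { fun extractText(displayObject: String): String }  // module 120
interface ConversionModule { fun convertToVoice(textData: String): ByteArray }         // module 130
interface PlayingModule { fun play(voiceData: ByteArray) }                             // module 140

// Wiring the modules together: detect -> extract -> convert -> play.
class ReadingTerminal(
    private val detection: DetectionModule,
    private val extraction: TextDataExtractionModule,
    private val conversion: ConversionModule,
    private val playing: PlayingModule,
) {
    fun onTouch(x: Float, y: Float) {
        val displayObject = detection.detectDisplayObject(x, y) ?: return
        val textData = extraction.extractText(displayObject)
        val voiceData = conversion.convertToVoice(textData)
        playing.play(voiceData)
    }
}
```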
Further details of the terminal 100 are shown in the accompanying drawings.
In the terminal-based reading method and the corresponding terminal of the present disclosure, a touching operation on the touch screen of the terminal can be detected to determine a display object, corresponding text data can be generated, and the generated text data can then be converted into voice data to be played. This allows visually impaired people to read audibly, broadens and simplifies the capabilities of the application, and makes it convenient to use.
The methods, modules, units, and terminals described herein may be implemented by hardware, machine-readable instructions, or a combination of hardware and machine-readable instructions. Machine-readable instructions used in the examples disclosed herein may be stored in a storage medium readable by multiple processors, such as a hard drive, CD-ROM, DVD, compact disk, floppy disk, magnetic tape drive, RAM, ROM, or other proper storage device. Alternatively, at least part of the machine-readable instructions may be substituted by specific-purpose hardware, such as custom integrated circuits, gate arrays, FPGAs, PLDs, specific-purpose computers, and so on.
A machine-readable storage medium is also provided to store instructions to cause a machine to execute a process as described according to examples herein. In one example, there is provided a system or apparatus having a storage medium that stores machine-readable program codes for implementing functions of any of the above examples and that may cause the system or the apparatus (or CPU or MPU) to read and execute the program codes stored in the storage medium.
In this situation, the program codes read from the storage medium may implement any one of the above examples.
The storage medium for storing the program codes may include a floppy disk, hard drive, magneto-optical disk, compact disk (such as CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), magnetic tape drive, Flash card, ROM and so on. The program code may be downloaded from a server computer via a communication network.
It should be noted that, alternatively to the program codes being executed by a computer, at least part of the operations performed by the program codes may be implemented by an operating system running in a computer following instructions based on the program codes to implement any of the above examples.
In addition, the program codes read from a storage medium may be written into a storage in an extension board inserted in the computer or into a storage in an extension unit connected to the computer. In this example, a CPU in the extension board or the extension unit executes at least part of the operations according to the instructions based on the program codes to implement any of the above examples.
Although described specifically throughout the entirety of the instant disclosure, representative examples of the present disclosure have utility over a wide range of applications, and the above discussion is not intended and should not be construed to be limiting, but is offered as an illustrative discussion of aspects of the disclosure.
What have been described and illustrated herein are examples along with some variations. The terms, descriptions, and figures used herein are set forth by way of illustration only and are not meant as limitations. Many variations are possible within the spirit and scope of the subject matter, which is intended to be defined by the following claims and their equivalents in which all terms are meant in their broadest reasonable sense unless otherwise indicated.
The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.
Reference throughout this specification to “one embodiment,” “an embodiment,” “specific embodiment,” or the like in the singular or plural means that one or more particular features, structures, or characteristics described in connection with an embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment,” “in a specific embodiment,” or the like in the singular or plural in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Number | Date | Country | Kind
---|---|---|---
201210305162.7 | Aug 2012 | CN | national
This application is a continuation of International Application No. PCT/CN2013/081932, filed on Aug. 21, 2013. This application claims the benefit and priority of Chinese Application No. 201210305162.7, filed Aug. 24, 2012. The entire disclosure of each of the above applications is incorporated herein by reference.
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/CN2013/081932 | Aug 2013 | US
Child | 14623672 | | US