This disclosure generally relates to education, and, in particular, to systems and methods for evaluating and improving reading skills in real time, for example, via a personal communications device.
Learning to read is a critical developmental skill for children. In some instances, a child may be apart from a parent for a period of time. For example, a child may live apart from a parent as a result of the parent's military service, a job that requires travel, or a divorce. In these instances, it can be difficult for families to be engaged with each other daily in real time, and it may be difficult for a parent to assist a child in developing his or her reading skills.
In other circumstances, there is a growing demand for children and adults to learn to read in a different language. However, finding a foreign language teacher for in-person instruction may be difficult, in part because many capable foreign language teachers are located abroad or in communities that are remote from the children and adults who wish to learn the foreign language.
The drawings are provided for purposes of illustration only and merely depict example embodiments of the disclosure. The drawings are provided to facilitate understanding of the disclosure and shall not be deemed to limit the breadth, scope, or applicability of the disclosure. Various embodiments may utilize elements or components other than those illustrated in the drawings, and some elements and/or components may not be present in various embodiments. The use of singular terminology to describe a component or element may, depending on the context, encompass a plural number of such components or elements and vice versa.
This disclosure will now provide a more detailed and specific description that will refer to the accompanying drawings. The drawings and specific descriptions of the drawings, as well as any specific or other embodiments discussed, are intended to be read in conjunction with the entirety of this disclosure.
This disclosure relates to systems and methods for evaluating and improving reading skills in real time in accordance with certain embodiments of the disclosure. In one embodiment, a system for evaluating and improving reading skills in real time can be provided. In another embodiment, a method for evaluating and improving reading skills in real time can be provided.
In one or more embodiments, the method for evaluating and improving reading skills in real-time may include receiving a user selection of an electronic book in an application executing on a user device associated with a first user, opening the user selection of the electronic book in the application, initiating a video call or an audio call with a second user in the application, receiving from the first user one or more first spoken words associated with a first reading content of the electronic book, transcribing the one or more first spoken words to first text via the application, determining if the first text matches the first reading content of the electronic book, and transmitting the determination of whether the first text matches the first reading content of the electronic book to the first user and the second user.
In one or more embodiments, the first user is one of a child or a language student, and the second user is one of a parent, a teacher, or a language-fluent person.
In one or more embodiments, the method may include receiving from the first user one or more second spoken words associated with a second reading content of the electronic book, transcribing the one or more second spoken words to second text via the application, determining if the second text matches the second reading content of the electronic book, and transmitting the determination of whether the second text matches the second reading content of the electronic book to the first user and the second user.
In one or more embodiments, the method may include initiating the video call or the audio call with the second user and a third user via the application.
In one or more embodiments, the application is configured to highlight the first reading content of the electronic book as the one or more first spoken words are received from the first user.
In one or more embodiments, receiving a user selection of the electronic book in the application further includes requesting a list of electronic books via a search function in the application, receiving a list of electronic books via the search function, and selecting the electronic book from the list of electronic books. In one or more embodiments, the search function is configured to allow the first user or the second user to search for the list of electronic books by at least one of a book title, an author name, or a keyword.
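For illustration only, the following sketch shows one way the search function described above could filter a catalog of electronic books by book title, author name, or keyword. The Book shape and the searchBooks helper are hypothetical names introduced for this example and are not part of the disclosed application.

```typescript
// Hypothetical book record; the real application's data model is not disclosed.
interface Book {
  id: string;
  title: string;
  author: string;
  keywords: string[];
}

// Case-insensitive match against book title, author name, or keywords.
function searchBooks(catalog: Book[], query: string): Book[] {
  const q = query.trim().toLowerCase();
  if (q.length === 0) {
    return catalog;
  }
  return catalog.filter(
    (book) =>
      book.title.toLowerCase().includes(q) ||
      book.author.toLowerCase().includes(q) ||
      book.keywords.some((keyword) => keyword.toLowerCase().includes(q))
  );
}

// Example: a first or second user searching the list of electronic books by keyword.
const catalog: Book[] = [
  { id: "1", title: "The Little Reader", author: "A. Author", keywords: ["phonics", "animals"] },
  { id: "2", title: "First Words", author: "B. Writer", keywords: ["sight words"] },
];
console.log(searchBooks(catalog, "phonics").map((b) => b.title)); // ["The Little Reader"]
```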
In one or more embodiments, a system for evaluating and improving reading skills in real-time may include a user device associated with a first user executing an application configured to receive a user selection of an electronic book, open the user selection of the electronic book, initiate a video call or an audio call with a second user, receive from the first user one or more first spoken words associated with a first reading content of the electronic book, transcribe the one or more first spoken words to first text, determine if the first text matches the first reading content of the electronic book, and transmit the determination of whether the first text matches the first reading content of the electronic book to the first user and the second user.
In one or more embodiments, the first user is one of a child or a language student, and the second user is one of a parent, a teacher, or a language-fluent person.
In one or more embodiments, the application is further configured to receive from the first user one or more second spoken words associated with a second reading content of the electronic book, transcribe the one or more second spoken words to second text, determine if the second text matches the second reading content of the electronic book, and transmit the determination of whether the second text matches the second reading content of the electronic book to the first user and the second user.
In one or more embodiments, the application is further configured to initiate the video call or the audio call with the second user and a third user.
In one or more embodiments, the application is further configured to highlight the first reading content of the electronic book as the one or more first spoken words are received from the first user.
In one or more embodiments, the application is further configured to request a list of electronic books via a search function, receive a list of electronic books via the search function, and select the electronic book from the list of electronic books. In one or more embodiments, the search function is configured to allow the first user or the second user to search for the list of electronic books by at least one of a book title, an author name, or a keyword.
In one or more embodiments, a device may comprise a memory that stores computer-executable instructions, and a processor configured to access the memory and execute the computer-executable instructions to at least receive a user selection of an electronic book in an application associated with a first user, open the user selection of the electronic book in the application, initiate a video call or an audio call with a second user in the application, receive from the first user one or more first spoken words associated with a first reading content of the electronic book, transcribe the one or more first spoken words to first text via the application, determine if the first text matches the first reading content of the electronic book, and transmit the determination of whether the first text matches the first reading content of the electronic book to the first user and the second user.
In one or more embodiments, the first user is one of a child or a language student, and the second user is one of a parent, a teacher, or a language-fluent person.
In one or more embodiments, the processor is further configured to access the memory and execute additional computer-executable instructions to at least receive from the first user one or more second spoken words associated with a second reading content of the electronic book, transcribe the one or more second spoken words to second text via the application, determine if the second text matches the second reading content of the electronic book, and transmit the determination of whether the second text matches the second reading content of the electronic book to the first user and the second user.
In one or more embodiments, the processor is further configured to access the memory and execute additional computer-executable instructions to at least initiate the video call or the audio call with the second user and a third user via the application.
In one or more embodiments, the application is configured to highlight the first reading content of the electronic book as the one or more first spoken words are received from the first user.
In one or more embodiments, the processor is further configured to access the memory and execute additional computer-executable instructions to at least request a list of electronic books via a search function in the application, receive a list of electronic books via the search function, and select the electronic book from the list of electronic books.
Turning to the figures,
In some embodiments, the one or more first user device(s) 120 and the one or more second user device(s) 140 can include one or more computer systems similar to that of the example machine of
Any of the first user device(s) 120 (e.g., 122 or 124) and any of the second user device(s) 140 (e.g., 142 or 144) may be configured to communicate with each other and/or any other component of the network environment 100 directly and/or via the one or more communications network(s) 130 and 135, wirelessly or wired.
As used herein, the term “Internet of Things (IoT) device” is used to refer to any object (e.g., an appliance, a sensor, etc.) that has an addressable interface (e.g., an Internet protocol (IP) address, a Bluetooth identifier (ID), a near-field communication (NFC) ID, etc.) and can transmit information to one or more other devices over a wired or wireless connection. An IoT device may have a passive communication interface, such as a quick response (QR) code, a radio-frequency identification (RFID) tag, an NFC tag, or the like, or an active communication interface, such as a modem, a transceiver, a transmitter-receiver, or the like. An IoT device can have a particular set of attributes (e.g., a device state or status, such as whether the IoT device is on or off, open or closed, idle or active, available for task execution or busy, and so on, a cooling or heating function, an environmental monitoring or recording function, a light-emitting function, a sound-emitting function, etc.) that can be embedded in and/or controlled/monitored by a central processing unit (CPU), microprocessor, ASIC, or the like, and configured for connection to an IoT network such as a local ad-hoc network or the Internet. For example, IoT devices may include, but are not limited to, cell phones, desktop computers, laptop computers, tablet computers, personal digital assistants (PDAs), and other devices that are equipped with an addressable communications interface for communicating with the IoT network.
Any of the communications network(s) 130 and 135 may include, but not be limited to, any one of a combination of different types of suitable communications networks such as, for example, broadcasting networks, cable networks, public networks (e.g., the Internet), private networks, wireless networks, cellular networks, or any other suitable private and/or public networks. Further, any of the communications network(s) 130 and 135 may have any suitable communication range associated therewith and may include, for example, global networks (e.g., the Internet), metropolitan area networks (MANs), wide area networks (WANs), local area networks (LANs), or personal area networks (PANs). In addition, any of the communications network(s) 130 and 135 may include any type of medium over which network traffic may be carried including, but not limited to, coaxial cable, twisted-pair wire, optical fiber, a hybrid fiber coaxial (HFC) medium, microwave terrestrial transceivers, radio frequency communication mediums, white space communication mediums, ultra-high frequency communication mediums, satellite communication mediums, or any combination thereof.
Any of the first user device(s) 120 (e.g., 122 or 124) and any of the second user device(s) 140 (e.g., 142 or 144) may include one or more communications antennas. Communications antennas may be any suitable type of antenna corresponding to the communications protocols used by the first user device(s) 120 and the second user device(s) 140. Some non-limiting examples of suitable communications antennas include Bluetooth antennas, Wi-Fi antennas, IEEE 802.11 family of standards compatible antennas, directional antennas, non-directional antennas, dipole antennas, folded dipole antennas, patch antennas, MIMO antennas, or the like. The communications antenna may be communicatively coupled to a radio component to transmit and/or receive signals, such as communications signals, to and/or from the first user device(s) 120 (e.g., 122 or 124) and the second user device(s) 140 (e.g., 142 or 144).
Any of the first user device(s) 120 (e.g., 122 or 124) and any of the second user device(s) 140 (e.g., 142 or 144) may include any suitable radio and/or transceiver for transmitting and/or receiving radio frequency (RF) signals in the bandwidth and/or channels corresponding to the communications protocols utilized by any of the first user device(s) 120 and the second user device(s) 140 to communicate with each other. The radio components may include hardware and/or software to modulate and/or demodulate communications signals according to pre-established transmission protocols. The radio components may further have hardware and/or software instructions to communicate via one or more Bluetooth, Wi-Fi, and/or Wi-Fi Direct protocols, as standardized by the Bluetooth and the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards. In certain example embodiments, the radio component, in cooperation with the communications antennas, may be configured to communicate via 2.4 GHz channels (e.g., 802.11b, 802.11g, 802.11n, 802.11ax), 5 GHz channels (e.g., 802.11n, 802.11ac, 802.11ax), or 60 GHz channels (e.g., 802.11ad, 802.11ay). The communications antennas may operate at 28 GHz and 40 GHz. It should be understood that this list of communication channels in accordance with certain 802.11 standards is only a partial list and that other 802.11 standards may be used (e.g., Next Generation Wi-Fi, or other standards). In some embodiments, non-Wi-Fi protocols may be used for communications between devices, such as Bluetooth, dedicated short-range communication (DSRC), Ultra-High Frequency (UHF) (e.g., IEEE 802.11af, IEEE 802.22), white band frequency (e.g., white spaces), or other packetized radio communications. The radio component may include any known receiver and baseband suitable for communicating via the communications protocols. The radio component may further include a low noise amplifier (LNA), additional signal amplifiers, an analog-to-digital (A/D) converter, one or more buffers, and digital baseband.
In some embodiments, and with reference to
It is understood that the above descriptions are for purposes of illustration and are not meant to be limiting.
In some embodiments, at step 202, the process 200 for evaluating and improving reading skills in real time may include presenting reading content at more than one user device. The reading content may be presented at a child's user device or a student's user device, where the student may be a child or an adult. The same reading content may be further presented at another user device. The other user device may be a parent's user device, another adult's user device, a teacher's user device, or another user device associated with a user who is assisting the child or student with developing his or her reading skills.
In some embodiments, at step 204, the child or student may begin to read aloud the reading content that is presented at his or her user device. In some embodiments, at step 206, the child's user device or the student's user device may listen to the speech of the child or the student. In some embodiments, at step 208, the child's user device or the student's user device may apply a real-time speech-to-text conversion system to the speech of the child or the student. In some embodiments, at step 210, the real-time speech-to-text conversion system may produce a transcribed text based on the speech of the child or the student.
In some embodiments, at step 212, the transcribed text may be processed and compared to the reading content that is presented at the child's user device or the student's user device. In some embodiments, at step 214, it may be determined whether the transcribed text matches the word contained in the reading content that is presented at the child's user device or the student's user device. Matching may involve evaluating the child's or student's pronunciation relative to the pronunciation as intended by the author. In some embodiments, at step 216, if the transcribed text is determined to match the word contained in the reading content, a response may be indicated in the reading content at both user devices (the child's user device or the student's user device, and the parent's user device, another adult's user device, or a teacher's user device, etc.), where the response may indicate a successful match between the transcribed text and the word contained in the reading content. In some embodiments, at step 218, if the transcribed text is determined not to match the word contained in the reading content, the word may be highlighted (for example, in red), and a breakdown of the word may be indicated in order to assist the child or student with reading the word if the user has attempted the word multiple times in error. In some embodiments, the process 200 may then return to step 216, where the response may be indicated in the reading content at both user devices (the child's user device or the student's user device, and the parent's user device, another adult's user device, or a teacher's user device, etc.), where the response may indicate a word breakdown associated with the word contained in the reading content.
In some embodiments, steps 212, 214, 216, and 218 may be executed one word at a time. In other embodiments, steps 212, 214, 216, and 218 may be executed multiple words at a time.
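A simplified sketch of the per-word comparison of steps 212 through 218 follows, assuming the transcribed text and the displayed reading content are available as plain strings. The normalize and matchWords helpers, and the positional alignment of words, are illustrative assumptions; a real implementation might also account for pronunciation, skipped words, or the batch processing noted above.

```typescript
// Result of comparing one printed word with the corresponding transcribed word.
interface WordResult {
  expected: string;        // word as printed in the reading content
  spoken: string | null;   // transcribed word, or null if nothing was transcribed
  matched: boolean;        // whether the transcription matched the printed word
}

// Strip punctuation and case so that "Dog," matches a transcription of "dog".
function normalize(word: string): string {
  return word.toLowerCase().replace(/[^\p{L}\p{N}']/gu, "");
}

// Align transcribed words to the printed words by position and compare each pair.
function matchWords(readingContent: string, transcribedText: string): WordResult[] {
  const expected = readingContent.split(/\s+/).filter(Boolean);
  const spoken = transcribedText.split(/\s+/).filter(Boolean);
  return expected.map((word, i) => {
    const heard = spoken[i] ?? null;
    return {
      expected: word,
      spoken: heard,
      matched: heard !== null && normalize(word) === normalize(heard),
    };
  });
}

// Example: the mismatched word ("quick") is the one that would be highlighted at both devices.
const results = matchWords("The quick brown fox.", "the quack brown fox");
console.log(results.filter((r) => !r.matched).map((r) => r.expected)); // ["quick"]
```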
In some embodiments, at step 220, metrics associated with the child's or the student's reading skills may be stored. The metrics may be based on whether the transcribed text matches the words presented in the reading content at the child's user device or the student's user device. For example, a word that is read in error may be identified as an error and stored in the user's record to be used for scoring and other analytics. The metrics may further include data regarding content selection, content management, reading progress for each book, and date and time information associated with the user's use of the application. In some embodiments, at step 222, the metrics associated with the child's or the student's reading skills may be presented at the various user devices, including the child's or the student's user device, and/or the parent's or adult's or teacher's user device. In some embodiments, at step 224, after the response is indicated in the reading content, the child or the student may continue reading. In such embodiments, the user device may continue to listen to the child's or the student's speech. In other embodiments, if the child or the student opts not to continue reading, the process 200 may move to step 222, where the metrics associated with the child's or the student's reading skills may be presented at the various user devices, including the child's or the student's user device, and/or the parent's or adult's or teacher's user device.
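As one hedged illustration of the metrics stored at step 220, a per-session record might resemble the following. The field names are assumptions; the disclosure requires only that errors, content selection, reading progress, and date and time information be captured for scoring and analytics.

```typescript
// Hypothetical per-session metrics record for a child or student reader.
interface ReadingMetrics {
  readerId: string;        // child or student profile
  bookId: string;          // selected electronic book (content selection)
  sessionStart: Date;      // date and time information
  sessionEnd?: Date;
  wordsAttempted: number;
  wordsMatched: number;
  errorWords: string[];    // words read in error, kept for scoring and analytics
  lastPageRead: number;    // reading progress within the book
}

// A simple derived score that could be presented at the various user devices.
function accuracy(metrics: ReadingMetrics): number {
  return metrics.wordsAttempted === 0 ? 0 : metrics.wordsMatched / metrics.wordsAttempted;
}

const session: ReadingMetrics = {
  readerId: "child-42",
  bookId: "book-7",
  sessionStart: new Date(),
  wordsAttempted: 120,
  wordsMatched: 112,
  errorWords: ["quick", "enough"],
  lastPageRead: 9,
};
console.log(`Accuracy: ${(accuracy(session) * 100).toFixed(1)}%`); // Accuracy: 93.3%
```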
In some embodiments, the voice data 304, reader metrics 306, cohort metrics 308, and aggregate metrics 310 may be used for system interpretation 312. For example, system interpretation 312 may include the user device being configured to evaluate the child's or student's reading levels based on the voice data 304, the reader metrics 306, the cohort metrics 308, and the aggregate metrics 310. In some embodiments, the voice data 304, reader metrics 306, cohort metrics 308, and aggregate metrics 310 may be used for educator interpretation 314, where a teacher, parent, or adult may evaluate the child's or student's reading levels based on the voice data 304, the reader metrics 306, the cohort metrics 308 (e.g., metrics for each grade level, metrics for each educator, metrics for each school/district/region/state, etc.), and the aggregate metrics 310. In certain embodiments, the educator interpretation 314 may rely at least in part on the system interpretation 312. In certain embodiments, the educator interpretation 314 may rely at least in part on standards interpretation 316, which may account for other standards that may be used to assess reading skills, including but not limited to proper pronunciation, proper enunciation, proper spelling, reading speed, reading comprehension, word recognition, and/or other reading metrics (e.g., for the user's grade level) as defined by various scoring standards. In some embodiments, the educator interpretation 314 may be further used to provide educator notes and comments to the user and to generate informed actions 318, which may include recommendations and/or suggested improvements in order for the child or student to improve his or her reading skills.
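The following sketch suggests, purely for illustration, how system interpretation 312 might combine reader metrics 306 with cohort metrics 308 to propose informed actions 318. The thresholds and names are hypothetical and would, in practice, be set by the scoring standards referenced in standards interpretation 316.

```typescript
// Hypothetical summary shapes for reader metrics 306 and cohort metrics 308.
interface MetricSummary {
  accuracy: number;        // 0..1, fraction of words read correctly
  wordsPerMinute: number;  // reading speed
}

// Illustrative system interpretation: compare a reader against his or her cohort
// and emit informed actions. Real thresholds would come from scoring standards.
function interpret(reader: MetricSummary, cohort: MetricSummary): string[] {
  const actions: string[] = [];
  if (reader.accuracy < cohort.accuracy - 0.1) {
    actions.push("Recommend re-reading recent books and reviewing highlighted error words.");
  }
  if (reader.wordsPerMinute < cohort.wordsPerMinute * 0.8) {
    actions.push("Suggest shorter or lower-level books to build reading speed.");
  }
  if (actions.length === 0) {
    actions.push("Reader is at or above cohort level; suggest advancing to the next reading level.");
  }
  return actions;
}

// Example: a reader below the cohort on both accuracy and speed.
console.log(interpret({ accuracy: 0.78, wordsPerMinute: 55 }, { accuracy: 0.92, wordsPerMinute: 80 }));
```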
In some embodiments, the presentation layer 600A may include the following views at a display interface of a user device: a sign-up view 602 (to allow users to create accounts for accessing the system for evaluating and improving reading skills), a sign-in view 604 (to allow users to log in to their accounts), a forgot password view 606 (to allow users to recover a password for their accounts), a mobile verification view 608 (to verify users before allowing them to log in to their accounts), a select profile view 610 (to allow users to choose their profile before accessing their accounts), a join-as-a-child view 612 (to allow children to create and access accounts that are tied to a parent or other adult), a family access code view 614 (to allow parents and children to connect to a single reading session via an access code), a walkthrough view 616 (to allow users to receive a guided tour of the graphical user interface (GUI)), an add child view 618 (to allow parents or other adults to connect a child to their accounts), a home view 620 (to allow users to access a home screen after logging in to their accounts), an all-book view 622 (to allow users to view all books that are available to them), a top pick view 624 (to allow users to view currently popular books), a search view 626 (to allow users to search for a particular book), a filter view 628 (to allow users to view a subset of books based on filtering criteria), a call view 630 (to allow users to initiate a call through the system), a user call selector view 632 (to allow users to select a call recipient through the system, such as a parent or a teacher), a go-to-page view 634 (to allow users to select a particular page within a book for viewing), a book detail view 636 (to allow users to obtain more information about a particular book), a read book view 638 (to allow users to peruse reading content within a book), a favorites view 640 (to allow users to customize a list of favorite books within their accounts), a notification view 642 (to allow users to receive and view notifications), a user profile view 644 (to allow users to view their profiles), an edit profile view 646 (to allow users to edit their profiles), a switch profile view 648 (to allow users to change between profiles), an edit child profile view 650 (to allow parents or adults to edit a profile associated with a child connected to their accounts), a change password view 652 (to allow users to switch passwords), a contact us view 654 (to allow users to reach out to system developers with questions and/or comments), a subscription view 656 (to allow users to view their subscription statuses), and an about us view 658 (to allow users to obtain more information about the system developers).
In some embodiments, the business layer 600B may include a registration manager 660 (for managing user registrations), a Facebook account manager 662 (for connecting users' Facebook accounts with user registrations), a Google account manager 664 (for connecting users' Google accounts with user registrations), an Apple manager 666 (for connecting users' Apple accounts with user registrations), an API handler 668 (for managing communications), an API encryption and decryption manager 670 (for managing communications), a video call manager 672 (for managing video calls during reading sessions), a command center manager 674 (for managing system operations), a user session manager 676 (for managing reading sessions), a subscription manager 678 (for managing user subscriptions), and a call notification manager 680 (for managing call notifications between users).
In some embodiments, the data layer 700A may include a user default 702 (for managing default user settings), an Amazon S3 bucket manager 704 (for managing objects stored within buckets associated with the user's account), a StoreKit manager 706 (for managing user purchases within the system), and crash logs 708 (for recording historical data associated with crashes of the system).
In some embodiments, the core layer 700B may include a push-kit manager 710 (for managing notifications), a user default 712 (for managing default user settings), a key-chain 714 (for local storage of data), a camera/gallery handler 716 (for managing images and/or videos associated with the system), a network connectivity manager 718 (for managing wireless or wired connectivity of the system), Google analytics 720 (for analyzing data associated with users), a server communication manager 722 (for managing communications), a Skyepub book reading SDK 724 (for managing and facilitating EPUB files), a device manager 726 (for managing user devices), and an API manager 728 (for managing communications).
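To make the layering above concrete, the sketch below expresses a few of the numbered elements as narrow interfaces, so that a presentation-layer view depends only on business-layer managers, which in turn depend on the data and core layers. The interface and method names are assumptions made for illustration and do not reflect the application's actual classes.

```typescript
// Core layer (700B): connectivity and device services.
interface NetworkConnectivityManager {
  isOnline(): boolean;
}

// Data layer (700A): persisted user settings.
interface UserDefaults {
  get(key: string): string | undefined;
  set(key: string, value: string): void;
}

// Business layer (600B): reading-session and call orchestration.
interface UserSessionManager {
  startSession(readerId: string, bookId: string): string; // returns a session id
  endSession(sessionId: string): void;
}

interface VideoCallManager {
  startCall(sessionId: string, participantIds: string[]): void;
  endCall(sessionId: string): void;
}

// Presentation layer (600A): e.g., a read book view talks only to business-layer managers.
class ReadBookView {
  constructor(
    private sessions: UserSessionManager,
    private calls: VideoCallManager,
    private network: NetworkConnectivityManager
  ) {}

  open(readerId: string, bookId: string, callWith: string[]): void {
    if (!this.network.isOnline()) {
      throw new Error("No connectivity available for a live reading session");
    }
    const sessionId = this.sessions.startSession(readerId, bookId);
    this.calls.startCall(sessionId, callWith);
  }
}
```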
Examples, as described herein, may include or may operate on logic or a number of components, modules, or mechanisms. Modules are tangible entities (e.g., hardware) capable of performing specified operations when operating. A module includes hardware. In an example, the hardware may be specifically configured to carry out a specific operation (e.g., hardwired). In another example, the hardware may include configurable execution units (e.g., transistors, circuits, etc.) and a computer-readable medium containing instructions, where the instructions configure the execution units to carry out a specific operation when in operation. The configuring may occur under the direction of the execution units or a loading mechanism. Accordingly, the execution units are communicatively coupled to the computer-readable medium when the device is operating. In this example, the execution units may be members of more than one module. For example, under operation, the execution units may be configured by a first set of instructions to implement a first module at one point in time and reconfigured by a second set of instructions to implement a second module at a second point in time.
The machine (e.g., computer system) 900 may include a hardware processor 902 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 904 and a static memory 906, some or all of which may communicate with each other via an interlink (e.g., bus) 908. The machine 900 may further include a graphics display device 910, an alphanumeric input device 912 (e.g., a keyboard), and a reading skills evaluation device 914. In an example, the graphics display device 910 and the alphanumeric input device 912 may be a touch screen display. The machine 900 may additionally include a storage device (i.e., drive unit) 916, a network interface device/transceiver 920 coupled to antenna(s) 930, and one or more sensors 928, such as a global positioning system (GPS) sensor, a compass, an accelerometer, or other sensor. The machine 900 may include an output controller 934, such as a serial (e.g., universal serial bus (USB), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate with or control one or more peripheral devices (e.g., a printer, a card reader, etc.)).
The storage device 916 may include a machine-readable medium 922 on which is stored one or more sets of data structures or instructions 924 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 924 may also reside, completely or at least partially, within the main memory 904, within the static memory 906, or within the hardware processor 902 during execution thereof by the machine 900. In an example, one or any combination of the hardware processor 902, the main memory 904, the static memory 906, or the storage device 916 may constitute machine-readable media.
While the machine-readable medium 922 is illustrated as a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 924.
Various embodiments may be implemented fully or partially in software and/or firmware. This software and/or firmware may take the form of instructions contained in or on a non-transitory computer-readable storage medium. Those instructions may then be read and executed by one or more processors to enable performance of the operations described herein. The instructions may be in any suitable form, such as but not limited to source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. Such a computer-readable medium may include any tangible non-transitory medium for storing information in a form readable by one or more computers, such as but not limited to read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; a flash memory, etc.
The term “machine-readable medium” may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 900 and that cause the machine 900 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding, or carrying data structures used by or associated with such instructions. Non-limiting machine-readable medium examples may include solid-state memories and optical and magnetic media. In an example, a massed machine-readable medium includes a machine-readable medium with a plurality of particles having resting mass. Specific examples of massed machine-readable media may include non-volatile memory, such as semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), or electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
The instructions 924 may further be transmitted or received over a communications network 926 using a transmission medium via the network interface device/transceiver 920 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communications networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), plain old telephone (POTS) networks, wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, and peer-to-peer (P2P) networks, among others. In an example, the network interface device/transceiver 920 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 926. In an example, the network interface device/transceiver 920 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine 900 and includes digital or analog communications signals or other intangible media to facilitate communication of such software. The operations and processes described and shown above may be carried out or performed in any suitable order as desired in various implementations. Additionally, in certain implementations, at least a portion of the operations may be carried out in parallel. Furthermore, in certain implementations, less than or more than the operations described may be performed.
Some embodiments may be used in conjunction with various devices and systems, for example, a personal computer (PC), a desktop computer, a mobile computer, a laptop computer, a notebook computer, a tablet computer, a server computer, a handheld computer, a handheld device, a personal digital assistant (PDA) device, a handheld PDA device, an on-board device, an off-board device, a hybrid device, a vehicular device, a non-vehicular device, a mobile or portable device, a consumer device, a non-mobile or non-portable device, a wireless communication station, a wireless communication device, a wireless access point (AP), a wired or wireless router, a wired or wireless modem, a video device, an audio device, an audio-video (A/V) device, a wired or wireless network, a wireless area network, a wireless video area network (WVAN), a local area network (LAN), a wireless LAN (WLAN), a personal area network (PAN), a wireless PAN (WPAN), and the like.
Some embodiments may be used in conjunction with one way and/or two-way radio communication systems, cellular radio-telephone communication systems, a mobile phone, a cellular telephone, a wireless telephone, a personal communication system (PCS) device, a PDA device which incorporates a wireless communication device, a mobile or portable global positioning system (GPS) device, a device which incorporates a GPS receiver or transceiver or chip, a device which incorporates an RFID element or chip, a multiple input multiple output (MIMO) transceiver or device, a single input multiple output (SIMO) transceiver or device, a multiple input single output (MISO) transceiver or device, a device having one or more internal antennas and/or external antennas, digital video broadcast (DVB) devices or systems, multi-standard radio devices or systems, a wired or wireless handheld device, e.g., a smartphone, a wireless application protocol (WAP) device, or the like.
Some embodiments may be used in conjunction with one or more types of wireless communication signals and/or systems following one or more wireless communication protocols, for example, radio frequency (RF), infrared (IR), frequency-division multiplexing (FDM), orthogonal FDM (OFDM), time-division multiplexing (TDM), time-division multiple access (TDMA), extended TDMA (E-TDMA), general packet radio service (GPRS), extended GPRS, code-division multiple access (CDMA), wideband CDMA (WCDMA), CDMA 2000, single-carrier CDMA, multi-carrier CDMA, multi-carrier modulation (MDM), discrete multi-tone (DMT), Bluetooth®, global positioning system (GPS), Wi-Fi, Wi-Max, ZigBee®, ultra-wideband (UWB), global system for mobile communications (GSM), 2G, 2.5G, 3G, 3.5G, 4G, fifth generation (5G) mobile networks, 3GPP, long term evolution (LTE), LTE advanced, enhanced data rates for GSM Evolution (EDGE), or the like. Other embodiments may be used in various other devices, systems, and/or networks.
Implementations of the systems, apparatuses, devices, and methods disclosed herein may comprise or utilize one or more devices that include hardware, such as, for example, one or more processors and system memory, as discussed herein. An implementation of the devices, systems, and methods disclosed herein may communicate over a computer network. A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or any combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links, which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of non-transitory computer-readable media.
Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause the processor to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions, such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
A memory device can include any one memory element or a combination of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)) and non-volatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.). Moreover, the memory device may incorporate electronic, magnetic, optical, and/or other types of storage media. In the context of this document, a “non-transitory computer-readable medium” can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: a portable computer diskette (magnetic), a random-access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory) (electronic), and a portable compact disc read-only memory (CD ROM) (optical). Note that the computer-readable medium could even be paper or another suitable medium upon which the program is printed, since the program can be electronically captured, for instance, via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
Those skilled in the art will appreciate that the present disclosure may be practiced in network computing environments with many types of computer system configurations, including in-dash vehicle computers, personal computers, desktop computers, laptop computers, message processors, handheld devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, various storage devices, and the like. The disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by any combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both the local and remote memory storage devices.
Further, where appropriate, the functions described herein can be performed in one or more of hardware, software, firmware, digital components, or analog components. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein. Certain terms are used throughout the description and claims to refer to particular system components. As one skilled in the art will appreciate, components may be referred to by different names. This document does not intend to distinguish between components that differ in name but not in function.
It should be noted that the sensor embodiments discussed above may comprise computer hardware, software, firmware, or any combination thereof to perform at least a portion of their functions. For example, a sensor may include computer code configured to be executed in one or more processors and may include hardware logic/electrical circuitry controlled by the computer code. These example devices are provided herein for purposes of illustration and are not intended to be limiting. Embodiments of the present disclosure may be implemented in further types of devices, as would be known to persons skilled in the relevant art(s).
At least some embodiments of the present disclosure have been directed to computer program products comprising such logic (e.g., in the form of software) stored on any computer-usable medium. Such software, when executed in one or more data processing devices, causes a device to operate as described herein.
In some embodiments, when the application is opened by a user on a user device (e.g., a mobile device), a logo may be displayed on the display screen. The display screen may then transition to a start screen after several seconds. In some embodiments, as displayed in
In some embodiments, the start screen may include a walkthrough output including multiple display screen outputs, each with an image at the top of the display screen and a tagline at the bottom of the display screen. The walkthrough output may describe features of the application. Because the walkthrough output may include more than one display screen output, the user may transition the display screen from one display screen output to another display screen output by tapping a finger on the display screen, swiping a finger on the display screen, or pressing a physical or displayed button on the user device. In one embodiment, as depicted in
In some embodiments, when a user has completed the walkthrough output, the application may display a registration screen output 1000E. In some embodiments, the registration screen output 1000E may provide options for a user to register for an account for himself/herself and/or his/her children, to sign in using an email account (if the user already has an account), or to sign in using a social media account (if the user already has an account), for example, the user's Facebook™, Google™, or Apple™ account. The registration screen output 1000E may also provide an option for a user to “Join as a Child.” In order to register for an account, a user may be required to provide his or her name, email address, and password in order to create an account or profile. In some embodiments, the user may be able to register using his or her account for a different social media or application provider, such as his or her Facebook™, Google™, or Apple™ account.
In some embodiments, a parental code may be set during the registration process. The parental code may be a permission access code that a parent or adult provides to a child in order for the child to gain access to a reading session that is connected to the parent or the adult. As depicted in
In some embodiments, after a user has selected a sign-in option or has input a parental access code, the display screen may display a profile selection output 1100B, as depicted in
In some embodiments, if the user selects a registration or sign-up option at the registration screen output 1000E, the sign-up page 1100E may be displayed at the display screen. The user may be requested to provide a name, an email address, a phone number, a password, and a password confirmation in order to proceed with account registration. In some embodiments, the user may be requested to agree to the terms and conditions and the privacy policy associated with the application in order to proceed with account registration. The user may select “Continue” after the data fields have been populated with the required information. In other embodiments, the user may select the Facebook icon, the Google icon, or the Apple icon in order to use an account associated with the social media provider to complete account registration.
In some embodiments, after registering for the user's account, the display screen may display a prompt 1200A for a parent to enter a permission access code, as displayed in
Similar to
In some embodiments, as depicted in
In some embodiments, after the user selects a book, reading content 1400A associated with the book may be displayed at the display screen. The reading content 1400A that is displayed may be the beginning of the book (in the case of the user selecting a new book) or the last-read portion of a book (in the case of the user selecting a partially-read book). The user may have the option to initiate a video call at the bottom of the screen. The reading content 1400A may include a “like” option for the user to add the book to his/her “my favorites” list of books. Once a video call is initiated and the call recipients have been identified, the video call initiation button at the bottom of the screen may be replaced by an “End Call” button, and icons to indicate the participants of the video call may be displayed in the top right hand corner of the display screen 1400B, as depicted in
As depicted in
As depicted in
In some embodiments, although not depicted in
As depicted in
As depicted in
In some embodiments, as depicted in
In some embodiments,
In some embodiments, although not depicted in
In some embodiments, although not depicted in
In some embodiments, although not depicted in
In some embodiments, although not depicted in
In some embodiments, although not depicted in
In some embodiments, although not depicted in
In some embodiments, although not depicted in
In some embodiments, although not depicted in
In some embodiments, although not depicted in
In some embodiments, the application may be compatible with various operating systems. For example, the application may be compatible with the Android™ OS, such as versions 6.0 through 10.0. The application may also be compatible with iOS™, such as versions 12.0 through 14.0. Thus, the application may support an iPhone® 6, iPhone® 6S, iPhone® 6 Plus, iPhone® 6S Plus, iPhone® 7, iPhone® 7 Plus, iPhone® 8, iPhone® 8 Plus, iPhone® X, iPhone® XS Max, iPhone® XR, or iPhone® 11. In some embodiments, an associated application programming interface (API) may be created in NodeJS. In some embodiments, the API may use a JSON format.
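As a non-limiting sketch of what one JSON endpoint of such a NodeJS API could look like, the example below uses TypeScript with the Express framework; Express, the route path, and the payload fields are assumptions, since the disclosure specifies only NodeJS and a JSON format.

```typescript
// Assumes the Express framework (the "express" and "@types/express" packages);
// the /api/reading-results path and the payload fields below are hypothetical.
import express, { Request, Response } from "express";

const app = express();
app.use(express.json()); // requests and responses use the JSON format noted above

// Receives a per-word (or per-passage) match determination and echoes it back;
// a server could relay this determination to both the first and second users.
app.post("/api/reading-results", (req: Request, res: Response) => {
  const { readerId, bookId, word, matched } = req.body ?? {};
  if (typeof readerId !== "string" || typeof word !== "string") {
    res.status(400).json({ error: "readerId and word are required" });
    return;
  }
  res.json({
    readerId,
    bookId,
    word,
    matched: Boolean(matched),
    receivedAt: new Date().toISOString(),
  });
});

app.listen(3000, () => console.log("Reading API listening on port 3000"));
```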
In other embodiments, the application may be capable of functioning as a web application. In some embodiments, the web application may be created with the Hypertext Preprocessor (PHP) framework CodeIgniter. The web application may be supported by various browsers. For example, the web application can be used on Mozilla™ Firefox™ 60.0 to 67.0, Google™ Chrome™ 65.0 to 74.0, Microsoft™ Internet Explorer™ 11.0 and Edge™, and Safari™ 11 to 12.1.1.
At block 2005, a user selection of an electronic book may be received in an application executing on a user device associated with a first user. In some embodiments, the first user may be one of a child or a language student, and the second user may be one of a parent, a teacher, or a language-fluent person. In some embodiments, the user selection of the electronic book may involve requesting a list of electronic books via a search function in the application, receiving the list of electronic books via the search function, and selecting the electronic book from the list of electronic books. In some embodiments, the search function is configured to allow the first user or the second user to search for the list of electronic books by at least one of a book title, an author name, or a keyword.
At block 2010, the user selection of the electronic book may be opened in the application.
At block 2015, a video call or an audio call may be initiated with a second user in the application. In some embodiments, the video call or the audio call may be initiated with the second user and a third user in the application.
At block 2020, one or more first spoken words associated with a first reading content of the electronic book may be received from the first user.
At block 2025, the one or more first spoken words may be transcribed to first text via the application.
At block 2030, it may be determined if the first text matches the first reading content of the electronic book. In some embodiments, the application may be configured to highlight the first reading content of the electronic book as the one or more first spoken words are received from the first user.
At block 2035, the determination of whether the first text matches the first reading content of the electronic book may be transmitted to the first user and the second user.
In some embodiments, one or more second spoken words associated with a second reading content of the electronic book may be received from the first user. The one or more second spoken words may be transcribed to second text via the application. It may then be determined if the second text matches the second reading content of the electronic book. The determination of whether the second text matches the second reading content of the electronic book may be transmitted to the first user and the second user.
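For illustration, the blocks above can be read as a single asynchronous flow, sketched below under the assumption that a speech-to-text service and a notification channel sit behind the placeholder transcribe and notify functions; the whole-passage comparison shown is a simplification of the word-level matching described earlier.

```typescript
// Placeholder dependencies: any speech-to-text service (block 2025) and any
// call/notification channel (block 2035) could stand behind these functions.
interface SessionDeps {
  transcribe(audio: ArrayBuffer): Promise<string>;
  notify(userId: string, payload: unknown): Promise<void>;
}

// Blocks 2005-2035 condensed into one pass over a single reading passage.
async function runReadingPassage(
  deps: SessionDeps,
  firstUserId: string,       // child or language student
  secondUserId: string,      // parent, teacher, or language-fluent person
  readingContent: string,    // opened from the selected electronic book (blocks 2005-2010)
  spokenAudio: ArrayBuffer   // captured during the video or audio call (blocks 2015-2020)
): Promise<boolean> {
  // Block 2025: transcribe the spoken words to text.
  const transcript = await deps.transcribe(spokenAudio);

  // Block 2030: a whole-passage comparison (a simplification of word-level matching).
  const normalize = (s: string) => s.toLowerCase().replace(/[^\p{L}\p{N}\s']/gu, "").trim();
  const matched = normalize(transcript) === normalize(readingContent);

  // Block 2035: transmit the determination to both the first and second users.
  await Promise.all([
    deps.notify(firstUserId, { readingContent, matched }),
    deps.notify(secondUserId, { readingContent, matched }),
  ]);
  return matched;
}
```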
Unless otherwise noted, the terms used herein are to be understood according to conventional usage by those of ordinary skill in the relevant art. In addition to the definitions of terms provided below, it is to be understood that as used in the specification and in the claims, “a” or “an” can mean one or more, depending upon the context in which it is used.
Throughout this application, the terms “include,” “includes,” and “including” mean “including but not limited to.” Note that certain embodiments may be described in relation to a single user device, but the corresponding description should be read to include embodiments of two or more user devices. Different features, variations, and multiple different embodiments are shown and described herein with various details. What has been described in this application, at times in terms of specific embodiments, is done for illustrative purposes only and is not intended to limit or suggest that what has been conceived is only one particular embodiment or set of specific embodiments. It is to be understood that this disclosure is not limited to any single specific embodiment or enumerated variation. Many modifications, variations, and other embodiments will come to the mind of those skilled in the art, and these are intended to be and are in fact covered by this disclosure. It is indeed intended that the scope of this disclosure should be determined by a proper legal interpretation and construction of the disclosure, including equivalents, as understood by those of skill in the art relying upon the complete disclosure present at the time of filing.
Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain implementations could include, while other implementations do not include, certain features, elements, and/or operations. Thus, such conditional language generally is not intended to imply that features, elements, and/or operations are in any way required for one or more implementations or that one or more implementations necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or operations are included or are to be performed in any particular implementation.
What has been described herein in the present specification and annexed drawings includes examples of systems and methods that, individually and in combination, permit the evaluation and improvement of reading skills in real time. It is, of course, not possible to describe every conceivable combination of components and/or methods for purposes of describing the various elements of the disclosure, but it can be recognized that many further combinations and permutations of the disclosed elements are possible. Accordingly, it may be apparent that various modifications can be made to the disclosure without departing from the scope thereof. In addition, or as an alternative, other embodiments of the disclosure may be apparent from consideration of the specification and annexed drawings, and practice of the disclosure as presented herein. It is intended that the examples put forth in the specification and annexed drawings be considered, in all respects, as illustrative and not limiting. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
This application claims the benefit of U.S. Provisional Application No. 63/209,141, filed Jun. 10, 2021, the disclosure of which is incorporated by reference as set forth in full.